SanDiegoMachineLearning/talks

Below are the links for SDML presentations and videos. For SDML book club notes and videos, please see the SDML book club repo.

Please let us know if you have a topic you would like to share in either a long (~30 minute) or short (~5-10 minute) talk.

2025

ML Papers and Lightning Talks
Monthly discussions of interesting ML papers and other ML lightning talks

2024

ML Papers and Lightning Talks
Monthly discussions of interesting ML papers and other ML lightning talks

2023

ML Papers and Lightning Talks
Monthly discussions of interesting ML papers and other ML lightning talks

State of the Art in Knowledge Editing by Alex Loftus
October 21, 2023

  • Work across many subfields of machine learning has become increasingly reliant on billion parameter-scale models. If we can find methods to update these models without spending the time and computation necessary to run full fine-tuning sessions, we unlock customization previously only possible with industry-scale compute power. A parallel and related question is in interpretability: Where does knowledge live in these giant models? How is it stored, and can we find interpretable directions in their weight-space?
  • Join us for an exploration of the research surrounding these questions. We'll explore work coming out of Jacob Andreas' lab at MIT, Christopher Manning at Stanford, Jacob Steinhardt's lab at Berkeley, and Ludwig Schmidt's group at the University of Washington. We'll also look at some open-source work being done by EleutherAI as well as knowledge-editing work in natural language processing done by the Allen Institute for AI's Israeli team.
  • Alex is a data scientist working at a company using machine learning to speed up the drug discovery process. He holds an undergraduate degree in neuroscience with minors in chemistry and philosophy from Western Washington University, and a master's in biomedical engineering with a data science concentration from Johns Hopkins University.
  • Video

Contrastive Inference Methods by Sean O'Brien
October 16, 2023

  • The rapidly increasing scale of large language models (LLMs) and their training corpora has led to remarkable improvements in fluency, reasoning, and information recall. Still, these models are prone to hallucination and fundamental reasoning errors, and reliably eliciting desired behaviors from them can be challenging. The development of strategies like chain-of-thought and self-consistency has demonstrated that training-free techniques can extract better behavior from existing models, launching a wave of research into prompting techniques. In this talk, we will examine a separate, emerging class of training-free techniques also intended to better control LLM behavior, which can be called contrastive inference methods. These techniques achieve improvements across a number of tasks by exploiting the behavioral differences between two different inference processes; in the case of contrastive decoding, between a large “expert” model and a smaller “amateur” model. While this talk will mostly focus on contrastive decoding, it will also introduce similar methods and discuss applications beyond the domain of text generation.
  • Sean O’Brien is a first-year Ph.D. student at UC San Diego, advised by Julian McAuley. In the past, he has researched dark matter search methods, chess strategy, and language model reasoning at UC Berkeley and Meta AI. Now he is interested in developing more reliable decoding methods for large language models, as well as exploring the application of these techniques to goal-oriented dialogue systems.
  • Slides and video
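As a rough illustration of the core idea (a simplified numpy sketch, not the exact implementation from the talk), contrastive decoding scores each next-token candidate by the gap between the expert's and the amateur's log-probabilities, after masking tokens the expert itself considers implausible. The function name and the `alpha` threshold below are illustrative:

```python
import numpy as np

def contrastive_decoding_scores(expert_logits, amateur_logits, alpha=0.1):
    """Score next-token candidates by contrasting expert and amateur models.

    Tokens whose expert probability falls below alpha * max(expert prob)
    are masked out (the plausibility constraint); the remaining tokens are
    ranked by the difference of the two models' log-probabilities.
    """
    def log_softmax(x):
        x = x - np.max(x)
        return x - np.log(np.sum(np.exp(x)))

    expert_logp = log_softmax(np.asarray(expert_logits, dtype=float))
    amateur_logp = log_softmax(np.asarray(amateur_logits, dtype=float))

    # Plausibility constraint: keep only tokens the expert finds likely enough.
    cutoff = np.log(alpha) + np.max(expert_logp)
    scores = expert_logp - amateur_logp
    scores[expert_logp < cutoff] = -np.inf
    return scores

# Toy vocabulary of 4 tokens: the expert strongly prefers token 0,
# while the amateur leans toward token 3.
scores = contrastive_decoding_scores([5.0, 2.0, 1.0, 0.5], [1.0, 1.0, 1.0, 3.0])
best = int(np.argmax(scores))
```

The masking step matters: without it, tokens the expert barely considers could win on the log-probability difference alone.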

ICML Conference Highlights by Alex Loftus
September 2, 2023

  • Alex Loftus shares highlights from his experience at this year's ICML conference. This talk covers new machine learning techniques in drug discovery and medicine presented at ICML. The general themes were molecule generation with diffusion; retrosynthesis, e.g., chemical synthesis planning; molecule representation learning; molecular property prediction; and LLMs for sequence infilling.
  • Alex is a data scientist working at a company using machine learning to speed up the drug discovery process. He holds an undergraduate degree in neuroscience with minors in chemistry and philosophy from Western Washington University, and a master's in biomedical engineering with a data science concentration from Johns Hopkins University.
  • Slides

Kaggle Competition Recap -- Identify Contrails to Reduce Global Warming by Ryan Chesler
August 19, 2023

  • Come hear about the recently completed Google Research - Identify Contrails to Reduce Global Warming competition on Kaggle. This free, virtual meetup will discuss the successful approaches used in this image segmentation competition.
  • From Kaggle: Contrails are clouds of ice crystals that form in aircraft engine exhaust. They can contribute to global warming by trapping heat in the atmosphere. Researchers have developed models to predict when contrails will form and how much warming they will cause. However, they need to validate these models with satellite imagery. Your work will help researchers improve the accuracy of their contrail models. This will help airlines avoid creating contrails and reduce their impact on climate change.
  • Ryan Chesler is a 2x Kaggle Grandmaster. He is an organizer of the San Diego Machine Learning meetup and a Data Scientist working for H2O.ai.
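For context on how segmentation competitions like this one are scored, submissions are typically evaluated with an overlap metric such as the Dice coefficient. A minimal numpy sketch on binary masks (the helper name is illustrative, not Kaggle's evaluation code):

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice coefficient between two binary masks (1 = contrail pixel).

    Dice = 2*|A ∩ B| / (|A| + |B|); eps guards against two empty masks.
    """
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Partial overlap: 1 shared pixel, mask sizes 2 and 1 -> 2*1/(2+1) ≈ 0.667.
a = np.array([[1, 1], [0, 0]])
b = np.array([[1, 0], [0, 0]])
score = dice_coefficient(a, b)
```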

Kaggle Vesuvius Ink Detection -- Winning Solutions
June 17, 2023

  • Come hear from the winning teams from the recently completed Vesuvius Ink Detection competition on Kaggle. The first place team will share about their solution and discuss a meta analysis of what other teams tried and seemed to work. Additional teams will be added to present as their schedules permit.
  • From Kaggle: Join the $1,000,000+ Vesuvius Challenge to resurrect an ancient library from the ashes of a volcano. In this competition you are tasked with detecting ink from 3D X-ray scans and reading the contents. Thousands of scrolls were part of a library located in a Roman villa in Herculaneum, a town next to Pompeii. This villa was buried by the Vesuvius eruption nearly 2000 years ago. Due to the heat of the volcano, the scrolls were carbonized, and are now impossible to open without breaking them. There is a $700,000 grand prize available to the first team that can read these scrolls from a 3D X-ray scan.
  • The members of the winning team, Ryan Chesler, Aina Tersol, Alex Loftus, and Ted Kyi, are all San Diego Machine Learning regulars.
  • Video recording of the session
  • Intro slides
  • Technical slides

Language Model Foundations by Ted Kyi
April 1, 2023

  • With all the current hype surrounding large language models, we are going to take a step back in time and look at the GPT-3 paper Language Models are Few-Shot Learners (https://arxiv.org/abs/2005.14165). We will utilize video paper reviews, blogs, and architectural diagrams to provide a clear understanding of how GPT-style language models work. Come for a solid foundation in how decoder-only transformers are used in language models, and to see how current discussions about LLMs echo the same debates that arose when GPT-3 came out.
  • Notes doc

Introduction to Transformers by Ted Kyi
February 18, 2023

  • After learning the architecture of the transformer, you know the parts and how they work. Through a coding example, pattern matching with a 1-D CNN will be compared to pattern matching using attention, with the goal of providing intuition as to why the self-attention mechanism makes neural networks so powerful.
  • Video and Jupyter notebook
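To make the CNN-vs-attention comparison concrete, here is a minimal numpy sketch of scaled dot-product self-attention with no learned projections (a simplification of the talk's coding example, assuming nothing beyond the standard formula). Unlike a 1-D CNN, which mixes only a fixed local window of neighbors, every token's output here is a similarity-weighted average over all tokens:

```python
import numpy as np

def self_attention(X):
    """Single-head scaled dot-product self-attention (no learned weights).

    Each row of X is a token embedding. For every token, the output is a
    weighted average of all tokens, with weights from the softmax of
    pairwise dot-product similarities.
    """
    d = X.shape[-1]
    scores = X @ X.T / np.sqrt(d)                 # pairwise similarities
    scores = scores - scores.max(axis=-1, keepdims=True)  # numerical stability
    weights = np.exp(scores)
    weights = weights / weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ X                            # mix of all tokens

# Tokens 0 and 2 are identical, so they attend identically.
X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 0.0]])
out = self_attention(X)
```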

2022

Hands-On Workshop on Training and Using Transformers by Ryan Chesler
Starting December 3, 2022 and running about 5 Saturdays (skipping the holidays)

Rigorous Statistics for Academics and Practitioners by Michal Fabinger
Michal is continuing Thursday evening lectures on statistics

  • Fill out this form to receive the notes and quizzes by email (you only need to do this once, but it is different from the initial 4-part series): https://form.typeform.com/to/rep1RuEc
  • The first series was Shapes and Moments of Probability Distributions, on March 31 and April 7, 2022. The MLT recordings of this series are Part 1 and Part 2.
  • The second series is Dependence of random variables and conditional distributions, three parts starting April 14, 2022. Recordings from the Tokyo sessions are Part 1, Part 2, and Part 3.
  • The third series will be Estimators, asymptotic theory, and types of convergence of random variables, in two parts starting May 5, 2022. The session recordings from MLT are now available: Part 1 and Part 2.
  • The fourth series will be on Hypothesis testing, in two parts starting May 19, 2022.
  • The fifth series covers Linear regression models, in four parts starting June 2, 2022.

Rigorous Probability and Statistics by Michal Fabinger
February 8, February 15, February 22, and March 1, 2022

  • A 4-part series of lectures on Probability and Statistics starting from the beginning and proceeding in an intuitive, but mathematically rigorous way. (Similar Machine Learning lectures could also be scheduled.) The lectures should help Machine Learning practitioners and researchers to understand academic papers and to implement their methods. They should also help people pursuing academic paths in various scientific disciplines.
  • These lectures are being jointly hosted by Silicon Valley Hands On Programming Events (https://www.meetup.com/HandsOnProgrammingEvents/). They were also delivered to Machine Learning Tokyo.
  • Fill out this form to receive the notes and quizzes by email: https://form.typeform.com/to/tYqIEqGN
  • The topics are:
    1. Feb. 8 meetup - Types of probability distributions and the need for a rigorous mathematical framework. Probability spaces, sample spaces, event spaces, and probability measures. Examples of probability spaces. Here is the video recording of the MLT lecture.
    2. Feb. 15 meetup - Sigma-algebras for events. Borel sigma-algebras for events corresponding to continuous sample spaces. Random variables. Examples of random variables. Here is the video of the MLT session.
    3. Feb. 22 - Distributions of random variables. Cumulative distribution functions, probability mass functions, and probability density functions. Examples of distributions. The video recording of the MLT lecture.
    4. Mar. 1 - Transformations of random variables. Transformations of cumulative distribution functions, probability mass functions, and probability density functions. Examples of usage of transformed random variables. Here's the video of the MLT meetup.
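The transformation rules from the fourth lecture can be summarized in one line. For a strictly increasing, differentiable g with Y = g(X):

```latex
F_Y(y) = P(Y \le y) = P\bigl(X \le g^{-1}(y)\bigr) = F_X\bigl(g^{-1}(y)\bigr),
\qquad
f_Y(y) = f_X\bigl(g^{-1}(y)\bigr)\,\left|\frac{d}{dy}\, g^{-1}(y)\right|,
```

where the absolute value makes the density formula hold for strictly decreasing g as well (the CDF identity then flips to 1 - F_X(g^{-1}(y))).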

2021

State of the Art Machine Learning Algorithms for Tabular Data by Ryan Chesler
November 20, 2021

  • In this talk, Kaggle Grandmaster Ryan Chesler will discuss state of the art models used for tabular machine learning. He will explain the data preprocessing steps as well as the algorithms and how the two interact. He will discuss the nuts and bolts of the algorithms as well as some benchmarks showing the performance of the various different methods.
  • Video

Evaluating Robustness of Neural Networks by Lily Weng
October 28, 2021

  • The robustness of neural networks to adversarial examples has received great attention due to its security implications. Despite various attack approaches for crafting visually imperceptible adversarial examples, little has been developed towards a comprehensive measure of robustness. In this talk, Weng will introduce several robustness quantification frameworks for deep neural networks against both adversarial and non-adversarial input perturbations, including the first robustness score, CLEVER; the efficient certification algorithms Fast-Lin, CROWN, and CNN-Cert; and the probabilistic robustness verification algorithm PROVEN. The proposed approaches are computationally efficient and provide good-quality robustness estimates and certificates, as demonstrated by extensive experiments on MNIST, CIFAR, and ImageNet.
  • Lily Weng is an Assistant Professor in the Halicioglu Data Science Institute at UC San Diego with an affiliation to the CSE department. She has broad research interest in the intersection of machine learning, optimization and reinforcement learning, with applications in cybersecurity and healthcare.
  • Video
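For a flavor of how certification methods bound a network's outputs over a whole neighborhood of inputs, here is interval bound propagation through one affine layer plus ReLU. This is a simpler primitive than the algorithms named in the talk, not one of them; the function name and numbers are illustrative:

```python
import numpy as np

def interval_affine(lower, upper, W, b):
    """Propagate an input box [lower, upper] through y = W x + b.

    Splitting W into positive and negative parts gives the tightest
    elementwise bounds on the affine output for a box input set.
    """
    W_pos = np.maximum(W, 0.0)
    W_neg = np.minimum(W, 0.0)
    out_lower = W_pos @ lower + W_neg @ upper + b
    out_upper = W_pos @ upper + W_neg @ lower + b
    return out_lower, out_upper

# A box of radius 0.1 around x = (0, 0), pushed through one layer + ReLU.
W = np.array([[1.0, -2.0], [0.5, 1.0]])
b = np.array([0.0, -1.0])
lo, hi = interval_affine(np.array([-0.1, -0.1]), np.array([0.1, 0.1]), W, b)
lo, hi = np.maximum(lo, 0.0), np.maximum(hi, 0.0)  # ReLU is monotone
```

If the certified output box for the true class stays above every other class's box, no perturbation inside the input box can flip the prediction; methods like Fast-Lin and CROWN tighten these bounds with linear relaxations.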

The New DBfication of ML/AI by Arun Kumar
September 22, 2021

  • The recent boom in ML/AI applications has brought into sharp focus the pressing need for tackling the concerns of scalability, usability, and manageability across the entire lifecycle of ML/AI applications. The ML/AI world has long studied the concerns of accuracy, automation, etc. from theoretical and algorithmic vantage points. But to truly democratize ML/AI, the vantage point of building and deploying practical systems is equally critical.
    In this talk, Professor Kumar will make the case that it is high time to bridge the gap between the ML/AI world and a world that exemplifies successful democratization of data technology: databases. He will show how new bridges rooted in the principles, techniques, and tools of the database world are helping tackle the above pressing concerns and in turn, posing new research questions to the world of ML/AI. As case studies of such bridges, he will describe two lines of work from his group: query optimization for ML systems and benchmarking data preparation in AutoML platforms. He will conclude with his thoughts on community mechanisms to foster more such bridges between research worlds and between research and practice.
  • Arun Kumar is an Assistant Professor in the Department of Computer Science and Engineering and the Halicioglu Data Science Institute and an HDSI Faculty Fellow at the University of California, San Diego. He is a member of the Database Lab and Center for Networked Systems and an affiliate member of the AI Group.
  • Slides and video

Learning with Distributed Data by Arya Mazumdar
August 25, 2021

  • In recent years, large-scale training of machine learning models necessarily takes place in massively distributed systems composed of individual computational nodes (e.g., ranging from GPUs to low-end commodity hardware). Such distributed systems are inherently constrained by communication. For example, a computational node 1) may not be able to communicate local information due to limited bandwidth, 2) may not want to share information in order to maintain privacy, or 3) may intentionally corrupt information as an adversarial attack. What is the compromise in the convergence rate of an optimization algorithm due to these communication constraints? We explore these trade-offs and show that first- and second-order methods of optimization still work under all such heavy information constraints.
  • Arya Mazumdar is an associate professor in the Halicioglu Data Science Institute of University of California San Diego, with additional affiliation to the Departments of Computer Science and Electrical Engineering.
  • Video

Discrete Morse-based Graph Skeletonization and Data Analysis by Yusu Wang
July 28, 2021

  • In recent years, topological and geometric data analysis (TGDA) has emerged as a new and promising field for processing, analyzing and understanding complex data. Indeed, geometry and topology form natural platforms for data analysis, with geometry describing the ”shape” and ”structure” behind data; and topology characterizing / summarizing both the domain where data are sampled from, as well as functions and maps associated to them. In this talk, Yusu will show how the topological objects and ideas can be combined with algorithmic developments to lead to new approaches for inferring hidden graph skeleton structure behind (low and high dimensional) data; as well as how they can be combined with machine learning pipelines for further data analysis tasks (e.g., to neuroscience and to material science). This talk is based on multiple projects with multiple collaborators and references will be given during the talk.
  • Yusu Wang is currently a Professor in the Halicioglu Data Science Institute at University of California, San Diego. Prior to joining UCSD, she was a Professor in the Computer Science and Engineering Department at the Ohio State University, where she also co-directed the Foundation of Data Science Research CoP at Translational Data Analytics Institute (TDAI)@OSU from 2018-2020.
  • Video

A friendly introduction to PySpark MLlib (and a taste of MLFlow) by Michelle Hoogenhout
July 17, 2021

  • Doing data science at scale? PySpark and MLlib bring the power of Spark's distributed processing to Python users so that you can train machine learning models on massive datasets. MLlib provides tools for data extraction, transformation and loading, common ML algorithms, and model evaluation. And with the addition of MLflow, it's easier than ever to log, reproduce and deploy your ML models. This walkthrough is aimed at those new to MLflow, and will take you through the ML lifecycle with PySpark's ML toolset.
  • Michelle Hoogenhout is a data scientist with a background in cognitive neuroscience and experimental design. She is a senior data science and analytics instructor at Galvanize and co-founder of Ingane Health, a data science consulting firm. Michelle holds a PhD (Psychology) from the University of Cape Town, South Africa and has published on topics such as statistics and data management, data science training methods, ethics, and cognitive and physiological assessment.
  • The recording is available at this video link

Causal Algorithmic Fairness and Transparency by Babak Salimi
May 26, 2021

  • Scaling and democratizing access to big data promises to provide meaningful and actionable information that supports decision-making. Today, data-driven decisions profoundly affect the course of our lives, such as whether to admit applicants to a particular school, offer them a job, or grant them a mortgage. Unfair, inconsistent, or faulty decision-making raises serious concerns about ethics and responsibility. For example, we may know that our training data is biased, but how do we avoid propagating discrimination when we use this data? How do we avoid incorrect, spurious and non-reproducible findings? How can we curate and expose existing data to make it "safe" for informed decision-making?
    In this talk, Babak will describe how we can combine techniques from causal inference and data management to develop systems and algorithms that help answer questions about fairness and transparency of algorithmic systems. First, he will present a new notion of fairness that subsumes and improves upon previous definitions and correctly distinguishes between fairness violations and non-violations. Further, he will discuss how we can leverage techniques from data management to remove historical discrimination from data. Second he will present a novel declarative framework that enables reasoning about fairness and discrimination from complex relational data. Finally, he will present his most recent work that exploits counterfactual reasoning for explaining black-box decision-making algorithms.
  • Babak Salimi is an assistant professor in HDSI at UC San Diego. Before joining UC San Diego, he was a postdoctoral research associate in the Department of Computer Science and Engineering, University of Washington where he worked with Prof. Dan Suciu and the database group.
  • Video

Take a Hack at COVID! by Benjamin Smarr
April 21, 2021

  • Professor Smarr is the technical lead on TemPredict, an international research effort aimed at building deployable algorithms for COVID detection and health monitoring. TemPredict gathered wearable and survey data from ~65,000 global participants. Professor Smarr will share some early insights, and highlight opportunities for interested hackers to get involved in future analyses.
  • Benjamin Smarr is an assistant professor at the Halicioğlu Data Science Institute and the Department of Bioengineering at the University of California, San Diego. As an NIH fellow at UC Berkeley he developed techniques for extracting health and performance predictors from repeated, longitudinal physiological measurements. Historically his work has focused on neuroendocrine control and women’s health, including demonstrations of pregnancy detection and outcome prediction, neural control of ovulation, and the importance of circadian rhythms in healthy in utero development.
  • Slides and video

Explaining by Removing: A Unified Framework for Model Explanation by Ian Covert
April 13, 2021

  • Researchers have proposed a wide variety of model explanation approaches, but it remains unclear how most methods are related or when one method is preferable to another. In this talk we'll discuss a new class of methods, removal-based explanations, that are based on the principle of simulating feature removal to quantify each feature’s influence. These methods vary in several respects, so we develop a framework that characterizes each method along three dimensions: 1) how the method removes features, 2) what model behavior the method explains, and 3) how the method summarizes each feature’s influence. Our framework unifies 25 existing methods, including several of the most widely used approaches (SHAP, LIME, Meaningful Perturbations, permutation tests), and it helps us reveal underlying connections with fields such as cognitive psychology, game theory and information theory.
  • Ian Covert is a PhD student at the University of Washington, where he is advised by Su-In Lee and collaborates with Scott Lundberg. His research focuses on explainable machine learning and the applications of these tools to problems in biology and medicine. Previously, he was a student researcher at Google Brain and completed his bachelors degree at Columbia University.
  • Slides and video
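A minimal sketch of the removal principle the talk unifies: "remove" each feature by replacing it with a baseline value and measure the change in the model's output. This is one simple point in the framework, not any specific published method; actual methods differ in how they remove features, what behavior they explain, and how they summarize influence:

```python
import numpy as np

def removal_importance(model, x, baseline):
    """Attribute a prediction by simulating feature removal.

    Each feature is 'removed' by replacing it with its baseline value;
    its importance is the resulting drop in the model's output.
    """
    full = model(x)
    importances = []
    for i in range(len(x)):
        x_removed = x.copy()
        x_removed[i] = baseline[i]
        importances.append(full - model(x_removed))
    return np.array(importances)

# Toy linear model: the first feature matters twice as much as the second,
# and removal-based importances recover exactly that ratio.
model = lambda x: 2.0 * x[0] + 1.0 * x[1]
imps = removal_importance(model, np.array([1.0, 1.0]), np.zeros(2))
```

Methods like SHAP refine this single-feature version by averaging removals over all feature subsets, which handles interactions between features.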

AutoNet: Automated Network Construction from Massive Text Corpora by Jingbo Shang
March 24, 2021

  • Mining structured knowledge from massive unstructured text data is a key challenge in data science. In this talk, I will discuss my proposed framework, AutoNet, that transforms unstructured text data into structured heterogeneous information networks, on which actionable knowledge can be further uncovered flexibly and effectively. AutoNet is a data-driven approach using distant supervision instead of human curation and labeling. It consists of four essential steps: (1) quality phrase mining; (2) entity recognition and typing; (3) relation extraction; and (4) taxonomy construction.
  • Jingbo Shang is an Assistant Professor in Computer Science Engineering and the Halıcıoğlu Data Science Institute at UC San Diego. His research has been recognized by many prestigious awards, including the Grand Prize of the Yelp Dataset Challenge in 2015, the Google Ph.D. Fellowship in Structured Data and Database Management in 2017, and the SIGKDD Dissertation Award Runner-up in 2020.
  • Video has been uploaded to YouTube

Rainforest Connection Species Audio Detection - A Kaggle Competition Recap by Ryan Chesler
March 13, 2021

  • Come hear about the Rainforest Connection Species Audio Detection competition on Kaggle (https://www.kaggle.com/c/rfcx-species-audio-detection). This free, virtual meetup will discuss the recently completed competition.
  • From Kaggle: In this competition, you’ll automate the detection of bird and frog species in tropical soundscape recordings. You'll create your models with limited, acoustically complex training data. Rich in more than bird and frog noises, expect to hear an insect or two, which your model will need to filter out.
  • Ryan is an organizer of the San Diego Machine Learning meetup and a Data Scientist working for H2O.ai developing automated machine learning systems. He is a Kaggle Grandmaster and self-driving car enthusiast.
  • Slides and video

2020

Investment Analytics in the Dawn of Artificial Intelligence by Bernard Lee
December 12, 2020

  • This talk attempts to answer a few questions:
    1. Why is artificial intelligence/machine learning (AI/ML) helpful for certain fintech apps?
    2. When does AI/ML work well in fintech? When might it not work so well?
    3. Apart from interesting theories, what are the practical prerequisites for AI/ML applications to perform well in real-life finance, to the point of replacing jobs?
    4. Where might society end up if trusted AI/ML applications in fintech become widely available?
    5. What are the potential implications for me, my financial future, and my career?
  • Bernard Lee is the founder/CEO of HedgeSPA, whose mission is to revolutionize the landscape of professional investment analytics by democratizing access to the most sophisticated B2B investment analytics tool. HedgeSPA's core investment platform is unique in its utilization of Artificial Intelligence, Big Data, and High-Performance/Quantum Computing.
  • Slides can be downloaded, however please contact Ted, the meetup organizer, for the password.

Neuroscience in the data science age by Bradley Voytek
December 8, 2020

  • The brain is often likened to a symphony, where 86 billion neurons are coordinating in an unfathomably complex electrochemical orchestra. However, our brains are more like a symphony without a conductor: there is no leader orchestrating those 86 billion neurons! Despite this apparent chaos, our brains usually just work (if we're lucky!). My research lab leverages a data science approach to neuroscience in order to understand how these 86 billion neurons communicate with one another, and to figure out when, why, and how that process breaks down.
  • Bradley Voytek is an Associate Professor in the Department of Cognitive Science, the Halıcıoğlu Data Science Institute, and the Neurosciences Graduate Program at UC San Diego. He is both an Alfred P. Sloan Neuroscience Research Fellow and a Kavli Fellow of the National Academies of Sciences, as well as a founding faculty member of the UC San Diego Halıcıoğlu Data Science Institute and the Undergraduate Data Science program, where he serves as Vice-Chair.
  • Slides and video

Lyft Motion Prediction for Autonomous Vehicles - A Kaggle Competition Recap by Ryan Chesler
November 28, 2020

  • Come hear about the Lyft Motion Prediction for Autonomous Vehicles competition on Kaggle (https://www.kaggle.com/c/lyft-motion-prediction-autonomous-vehicles). This virtual meetup is scheduled just days after the competition closes.
  • From Kaggle: In this competition, you’ll apply your data science skills to build motion prediction models for self-driving vehicles. You'll have access to the largest Prediction Dataset ever released to train and test your models. Your knowledge of machine learning will then be required to predict how cars, cyclists, and pedestrians move in the AV's environment.
  • Ryan is an organizer of the San Diego Machine Learning meetup and a Data Scientist working for H2O.ai developing automated machine learning systems. He is a Kaggle triple master and self-driving car enthusiast.
  • Slides and video

A RapidMiner Overview by Steven Fouskarinis
November 14, 2020

  • RapidMiner is a platform used by over 4,000 universities to teach machine learning concepts to students without presuming those students can program. The platform is also used by over 40,000 organizations worldwide for commercial applications. Come and take a brief but informative guided tour of the RapidMiner coal mines from a user, not a salesperson.
  • Steven Fouskarinis is an electrical engineer and computer scientist by training who works on scaling execution at a company level. Past projects include using NLP to infer ICDs & CPTs from clinical notes, automating medical bill creation and submission, predicting infusion pump misuse, and using a chatbot as a non-visual interface for research paper discovery.
  • Slides and video

Genetics & Genomics by Chris Keown
June 20, 2020

  • Most people have only a superficial understanding of genetics, even though it plays a causal role in who we are. In the first half of this talk, I will explain what the field of genetics is, how we do experiments and analyses, and recent findings and challenges in the field. In the second half of my talk, I will focus on genomics and epigenomics. These are two very hot areas of research, but are lesser known to non-experts. Again, I will explain the field, how we do experiments and analyses, and how machine learning has been and can be applied in this domain.
  • Slides and video

Productizing ML by Ryan Chesler
June 13, 2020

  • A discussion of how to make a product out of a machine learning idea, walking briefly through all of the things that need to be considered while building a machine learning project, beyond just building models.
  • Ryan is an organizer of the San Diego Machine Learning meetup and a Data Scientist working for H2O.ai developing automated machine learning systems. He is a Kaggle triple master and has taken several systems to production all the way from idea to deployment.
  • Video

Intro to Text Summarization and Topic Segmentation by Vibhu Sapra
June 13, 2020

  • An introduction to text summarization and to different approaches towards topic segmentation. The talk will cover the basics of how machine summarization works, applications of text summarization, code examples, and explore current state of the art models.
  • Vibhu is the founder of Byrd.ai and has been developing various NLP models for text summarization in a production setting.
  • Video
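For a taste of the extractive side of summarization (a classic frequency-based baseline, not any of the state-of-the-art models covered in the talk), sentences can be scored by the document-level frequency of the words they contain:

```python
import re
from collections import Counter

def extractive_summary(text, n_sentences=1):
    """Frequency-based extractive summarization (a classic baseline).

    Sentences are scored by the summed document-wide frequency of their
    words; the top-scoring sentences are returned in original order.
    Abstractive models instead generate new text rather than selecting it.
    """
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
    words = re.findall(r"[a-z']+", text.lower())
    freq = Counter(words)
    scored = [(sum(freq[w] for w in re.findall(r"[a-z']+", s.lower())), i, s)
              for i, s in enumerate(sentences)]
    top = sorted(scored, key=lambda t: -t[0])[:n_sentences]
    return " ".join(s for _, _, s in sorted(top, key=lambda t: t[1]))

doc = ("Spark makes distributed computing simple. "
       "Spark also has a machine learning library. "
       "The weather was nice today.")
summary = extractive_summary(doc, n_sentences=1)
```

Real systems add refinements such as stop-word removal, TF-IDF weighting, or sentence-graph ranking (e.g., TextRank), but the select-and-score structure is the same.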

Self-Driving Cars by Ryan Chesler
June 6, 2020

  • A discussion of the mechanisms and challenges involved in self-driving cars and what part machine learning plays in it.
  • Ryan is an organizer of the San Diego Machine Learning meetup and a Data Scientist working for H2O.ai developing automated machine learning systems. He is a Kaggle triple master and self-driving car enthusiast.
  • Video

A Beginner's Overview of Spark and PySpark by Ted Kyi
May 30, 2020

  • A discussion of what Spark is, why you would use it, as well as details, key features, code examples and how to learn more about it.
  • Ted is vice president of analytics for Matrix Medical Network. https://www.linkedin.com/in/tedkyi/
  • The slides and PySpark notebook are available in Ted's GitHub repo https://github.com/tedkyi/spark_talk; unfortunately, video is not available.

2019 and earlier

Prior to the global pandemic, we met in person and regrettably did not record any of our presentations.

Additional information

For SDML book club notes and videos, please see the book club repo.

To stay in touch with San Diego Machine Learning and receive announcements of all of our events, join our Meetup group https://www.meetup.com/San-Diego-Machine-Learning.

For more events, job postings, and discussion of machine learning, join our slack channel https://join.slack.com/t/sdmachinelearning/shared_invite/zt-2b2207qhg-Iyys1g0Ot6iErTYMioV9Mg



About

Presentations other than book club meetings
