DeepEarth is an AI model for the planet that fuses self-supervised, multi-modal, and spatio-temporal deep learning. Its mission is to help scientists, engineers, and designers solve global sustainability challenges (e.g. in climate and biodiversity) through AI.
DeepEarth learns by jointly reconstructing masked multi-modal datasets (as seen above). It uses a novel space-time positional encoder, Earth4D, built especially for Earth observation data (as seen below).
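To make the self-supervised objective concrete, here is a minimal PyTorch sketch of masked multi-modal reconstruction. Everything in it (class name, linear tokenizers, tensor shapes) is an illustrative assumption rather than the actual DeepEarth code; the only fixed idea is that observations from several modalities are tokenized, a random subset is masked, space-time encodings are added, and a transformer reconstructs the hidden values.

```python
import torch
import torch.nn as nn

class MaskedReconstructionSketch(nn.Module):
    """Toy masked multi-modal reconstruction model (illustrative only)."""

    def __init__(self, modality_dims, d_model=256, n_heads=8, n_layers=4):
        super().__init__()
        # One linear "tokenizer" per modality (stand-in for real encoders).
        self.tokenizers = nn.ModuleList([nn.Linear(d, d_model) for d in modality_dims])
        self.mask_token = nn.Parameter(torch.zeros(d_model))
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, n_layers)
        # One head per modality to reconstruct its raw values.
        self.heads = nn.ModuleList([nn.Linear(d_model, d) for d in modality_dims])

    def forward(self, modality_values, space_time_enc, mask):
        # modality_values: list of (B, T, D_m) tensors, one per modality.
        # space_time_enc:  (B, T, d_model) positional features for (x, y, z, t),
        #                  e.g. from a space-time encoder such as Earth4D.
        # mask:            (B, T, M) boolean, True where a value is hidden.
        tokens = torch.stack(
            [tok(v) for tok, v in zip(self.tokenizers, modality_values)], dim=2
        )                                                  # (B, T, M, d_model)
        tokens = torch.where(mask.unsqueeze(-1), self.mask_token, tokens)
        tokens = tokens + space_time_enc.unsqueeze(2)      # inject where/when
        B, T, M, D = tokens.shape
        hidden = self.backbone(tokens.reshape(B, T * M, D)).reshape(B, T, M, D)
        # Reconstruct every modality; the loss is applied only at masked entries.
        return [head(hidden[:, :, m]) for m, head in enumerate(self.heads)]
```

A training step would mask a random fraction of entries per modality at each (x, y, z, t) site and minimize reconstruction error only on the masked positions.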
- November 17, 2025: 99% parameter reduction, 4× speedup. Earth4D with learned hash probing, tested on an ecological benchmark with only 5 million parameters, yields near state-of-the-art accuracy with spectacular efficiency. See hyperparameter grid search.
- November 16, 2025: 23% error reduction in the space-time encoder. Lance Legel and Qin Huang implemented learned hash probing in Earth4D (sketched in code after this list), achieving state-of-the-art R² on an ecological forecasting benchmark. See commit.
- October 29, 2025: Predicting fire risk. Qin Huang, Brandon Voelker, and Lance Legel presented on simulating live fuel moisture content through NSF's Institute for Geospatial Understanding. See event.
- October 27, 2025: Battle-hardened (x, y, z, t) AI. For our spatio-temporal multi-resolution hash encoding, we fixed a numerical bug in NVIDIA's CUDA kernels based on profiling of hash collisions.
- September 30, 2025: Presentation at a top AI lab. Thanks to the Allen Institute for AI for hosting a one-hour talk with scientists pioneering AI foundation models for the planet. See video and slides.
- August 8, 2025: NSF summer school program. NSF funded a week-long "Spatial AI for Disaster Resilience" summer school in Boulder, Colorado, where 5 PhD students researched and developed DeepEarth. See demos.
- June 23, 2025: Workshop in Chicago. NSF funded a 3-hour workshop on DeepEarth at the "GeoAI for Sustainability" conference in Chicago; 3 professors, 5 postdocs, and 2 PhD students contributed. See slides.
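For readers curious about the learned hash probing mentioned in the November entries, below is a toy, single-level sketch of the general idea: instead of trusting a single hashed slot (which can collide), each hashed cell learns a soft choice among a few candidate slots in the feature table. This is an assumption-laden illustration, not the Earth4D or CUDA implementation; all names and sizes are invented for the example.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LearnedHashProbeSketch(nn.Module):
    """Toy single-level feature lookup with learned probing (illustrative only)."""

    def __init__(self, table_size=2**14, index_size=2**12, n_probes=8, feat_dim=2):
        super().__init__()
        # The (small) feature table that hash collisions would normally corrupt.
        self.features = nn.Parameter(1e-2 * torch.randn(table_size, feat_dim))
        # Learned logits choosing, per hashed bucket, which probe offset to read.
        self.probe_logits = nn.Parameter(torch.zeros(index_size, n_probes))
        self.table_size = table_size
        self.index_size = index_size
        self.n_probes = n_probes

    def forward(self, cell_ids):
        # cell_ids: (N,) int64 ids of (x, y, z, t) grid vertices, already hashed
        # upstream by the multi-resolution encoding.
        base = cell_ids % self.table_size                        # primary slot
        logits = self.probe_logits[cell_ids % self.index_size]   # (N, n_probes)
        offsets = torch.arange(self.n_probes, device=cell_ids.device)
        slots = (base.unsqueeze(-1) + offsets) % self.table_size  # candidate slots
        weights = F.softmax(logits, dim=-1).unsqueeze(-1)          # soft selection
        # Weighted read over the probed slots; an argmax could replace the softmax
        # at inference time for a single lookup per vertex.
        return (weights * self.features[slots]).sum(dim=1)        # (N, feat_dim)
```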
DeepEarth is a deep neural network that learns to answer classical Bayesian questions, e.g. "As variable α changes across space and time, how is variable β most likely to change, given all available evidence?"
Following a mathematical proof from Google DeepMind, DeepEarth learns the most probable statistical model of real-world data across space and time. It learns over (x, y, z, t, energy) tuples, where energy can be any set of real-valued metrics in ℝᵈ.
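Schematically (with notation invented here, not taken from the pre-prints), training drives the network's conditionals toward those of the data-generating process:

$$
p_\theta\!\left(E_\beta \mid E_\alpha,\; x, y, z, t\right) \;\approx\; p\!\left(E_\beta \mid E_\alpha,\; x, y, z, t,\; \mathcal{D}\right), \qquad E \in \mathbb{R}^d,
$$

where $\mathcal{D}$ is the set of all observed (x, y, z, t, energy) tuples, and $E_\alpha$, $E_\beta$ are the components of the energy vector corresponding to variables α and β.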
A large number of DeepEarth models can be trained for diverse scientific domains: each model is trained by simply inputting domain-specific datasets, distributed across space and time. Deep inductive priors are automatically learned across all modalities.
DeepEarth models are trained as physical simulators of data observed across spacetime (e.g. predicting fire risk from historical data). Simulators can also be fine-tuned for specific applications, much as ChatGPT was fine-tuned from GPT.
One of the great lessons from Einstein's relativity is that space and time are not independent variables. Following Grid4D, Earth4D extends NVIDIA's 3D multi-resolution hash encoding to learn spatio-temporal distributions.
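As a rough illustration of that idea (not the Earth4D code; the level count, table sizes, and hash primes below are assumptions), a 4D multi-resolution hash encoding hashes the corners of an (x, y, z, t) grid cell at several resolutions, looks up learned features, and interpolates them:

```python
import torch
import torch.nn as nn

# One large prime per axis for the XOR hash (values are illustrative).
PRIMES = (1, 2654435761, 805459861, 3674653429)

def hash4d(vertices, table_size):
    # vertices: (N, 4) int64 grid corners -> (N,) indices into the feature table.
    h = vertices[:, 0] * PRIMES[0]
    for axis in range(1, 4):
        h = h ^ (vertices[:, axis] * PRIMES[axis])
    return h % table_size

class HashEncoding4D(nn.Module):
    """Toy multi-resolution hash encoding over (x, y, z, t) (illustrative only)."""

    def __init__(self, n_levels=8, feat_dim=2, table_size=2**16,
                 base_res=16, growth=1.7):
        super().__init__()
        self.tables = nn.Parameter(1e-4 * torch.randn(n_levels, table_size, feat_dim))
        self.resolutions = [int(base_res * growth ** l) for l in range(n_levels)]
        self.table_size = table_size

    def forward(self, xyzt):
        # xyzt: (N, 4) coordinates normalized to [0, 1] in space and time.
        features = []
        for level, res in enumerate(self.resolutions):
            pos = xyzt * res
            lower = pos.floor().long()        # lower grid vertex per point
            frac = pos - lower.float()        # quadrilinear interpolation weights
            interp = 0.0
            for corner in range(16):          # 2**4 corners of a 4D cell
                offs = torch.tensor([(corner >> d) & 1 for d in range(4)],
                                    device=xyzt.device)
                idx = hash4d(lower + offs, self.table_size)
                w = torch.where(offs.bool(), frac, 1.0 - frac).prod(-1, keepdim=True)
                interp = interp + w * self.tables[level][idx]
            features.append(interp)
        return torch.cat(features, dim=-1)    # (N, n_levels * feat_dim)
```

Earth4D, per the news above, additionally replaces the plain modulo hash with learned hash probing and runs as CUDA kernels rather than pure PyTorch.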
Design and development of DeepEarth are led by award-winning scientists and engineers from Stanford University, the University of Florida, and Ecodash.ai, along with one of the first engineers from Google DeepMind.
DeepEarth is an MIT-licensed open source project designed and built to solve planetary-scale problems 🌎, especially through AI-powered maximization of ecosystem services – e.g. for sustainable agriculture, environmental restoration, & ecological landscape design.
Collaborators are welcome! Contact Lance Legel at [email protected] or submit an issue/PR here.
For further details, see pre-print previews: