Easily load U.S. Climate Reference Network (USCRN) data.
With `uscrn`, fetching and loading years of data for all USCRN sites[^1] takes just one line of code[^2].
Example:
```python
import uscrn

df = uscrn.get_data(2019, "hourly", n_jobs=6)  # pandas.DataFrame
ds = uscrn.to_xarray(df)  # xarray.Dataset, with soil depth dimension if applicable (hourly, daily)
```

Both `df` (pandas) and `ds` (xarray) include dataset and variable metadata.
For `df`, these are in `df.attrs` and can be preserved by writing to Parquet with the PyArrow engine[^3] (pandas v2.1+).
```python
df.to_parquet("uscrn_2019_hourly.parquet", engine="pyarrow")
```
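The round trip can be checked by reading the file back. A minimal sketch, assuming pandas v2.1+ with PyArrow installed and the `df`/`ds` objects and file name from the example above (the exact `attrs` keys depend on the dataset):

```python
import pandas as pd

# Read the Parquet file back; with pandas v2.1+ and the PyArrow engine,
# the metadata stored in df.attrs should survive the round trip.
df2 = pd.read_parquet("uscrn_2019_hourly.parquet", engine="pyarrow")
print(df2.attrs.keys())  # dataset-level metadata keys

# The xarray Dataset from the example above can likewise be written to
# netCDF, keeping its attributes (netcdf4 is in the conda environment below).
ds.to_netcdf("uscrn_2019_hourly.nc")
```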
Conda install example[^4]:

```sh
conda create -n crn -c conda-forge python=3.11 joblib numpy pandas pyyaml requests xarray pyarrow netcdf4
conda activate crn
pip install --no-deps uscrn
```
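After installing, the site metadata table mentioned in footnote 1 can be loaded directly. A minimal sketch, assuming `uscrn.load_meta()` returns a pandas DataFrame:

```python
import uscrn

# Load the USCRN site metadata table.
meta = uscrn.load_meta()
print(meta.head())
```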
[^1]: Use `uscrn.load_meta()` to load the site metadata table.

[^2]: Not counting the `import` statement...

[^3]: Or the fastparquet engine with fastparquet v2024.2.0+.

[^4]: `uscrn` is not yet on conda-forge.