
Commit 1e6a1a7

update exercise format

1 parent 68d1fb9 commit 1e6a1a7

14 files changed: +289 −284 lines changed

advanced/apply_ufunc/automatic-vectorizing-numpy.ipynb

Lines changed: 6 additions & 7 deletions
@@ -63,18 +63,17 @@
 " out[index, :] = np.interp(..., array[index, :], ...)\n",
 "```\n",
 "\n",
-"\n",
-"```{exercise}\n",
-":label: coreloopdims\n",
-"\n",
+"::::{admonition} Exercise\n",
+":class: tip\n",
 "Consider the example problem of interpolating a 2D array with dimensions `space` and `time` along the `time` dimension.\n",
 "Which dimension is the core dimension, and which is the \"loop dimension\"?\n",
-"```\n",
-"```{solution} coreloopdims\n",
+"\n",
+":::{admonition} Solution\n",
 ":class: dropdown\n",
 "\n",
 "`time` is the core dimension, and `space` is the loop dimension.\n",
-"```\n",
+":::\n",
+"::::\n",
 "\n",
 "## Vectorization\n",
 "\n",
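For readers following along, the loop this exercise refers to can be sketched in plain NumPy; the shapes and coordinate values below are illustrative assumptions, not taken from the notebook.

```python
import numpy as np

# Interpolate along `time` (the core dimension) for each point in
# `space` (the loop dimension). Shapes are illustrative only.
ntime, nspace = 10, 5
time = np.linspace(0, 1, ntime)
new_time = np.linspace(0, 1, 2 * ntime)
array = np.random.rand(nspace, ntime)

out = np.empty((nspace, new_time.size))
for index in range(nspace):
    # np.interp handles only 1D data, hence the explicit loop
    out[index, :] = np.interp(new_time, time, array[index, :])

print(out.shape)  # (5, 20)
```

Vectorizing exactly this loop is what the notebook's next section covers.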

advanced/apply_ufunc/complex-output-numpy.ipynb

Lines changed: 12 additions & 11 deletions
@@ -138,19 +138,20 @@
 "tags": []
 },
 "source": [
-"```{exercise}\n",
-":label: newdim\n",
+"::::{admonition} Exercise\n",
+":class: tip\n",
 "\n",
 "Apply the following function using `apply_ufunc`. It adds a new dimension to the input array, let's call it `newdim`. Specify the new dimension using `output_core_dims`. Do you need any `input_core_dims`?\n",
 "\n",
 "```python\n",
 "def add_new_dim(array):\n",
 " return np.expand_dims(array, axis=-1)\n",
 "```\n",
-"````{solution} newdim\n",
+"\n",
+":::{admonition} Solution\n",
 ":class: dropdown\n",
 "\n",
-"``` python\n",
+"```python\n",
 "def add_new_dim(array):\n",
 " return np.expand_dims(array, axis=-1)\n",
 "\n",
@@ -161,7 +162,8 @@
 " output_core_dims=[[\"newdim\"]],\n",
 ")\n",
 "```\n",
-"````"
+":::\n",
+"::::"
 ]
 },
 {
@@ -327,8 +329,8 @@
 "tags": []
 },
 "source": [
-"````{exercise}\n",
-":label: generalize\n",
+"::::{admonition} Exercise\n",
+":class: tip\n",
 "\n",
 "We presented the concept of \"core dimensions\" as the \"smallest unit of data the function could handle.\" Do you understand how the above use of `apply_ufunc` generalizes to an array with more than one dimension? \n",
 "\n",
@@ -337,9 +339,8 @@
 "air3d = xr.tutorial.load_dataset(\"air_temperature\").air)\n",
 "``` \n",
 "Your goal is to have a minimum and maximum value of temperature across all latitudes for a given time and longitude.\n",
-"````\n",
 "\n",
-"````{solution} generalize\n",
+":::{admonition} Solution\n",
 ":class: dropdown\n",
 "\n",
 "We want to use `minmax` to compute the minimum and maximum along the \"lat\" dimension always, regardless of how many dimensions are on the input. So we specify `input_core_dims=[[\"lat\"]]`. The output does not contain the \"lat\" dimension, but we expect two returned variables. So we pass an empty list `[]` for each returned array, so `output_core_dims=[[], []]` just as before.\n",
@@ -352,8 +353,8 @@
 " input_core_dims=[[\"lat\"]],\n",
 " output_core_dims=[[],[]],\n",
 ")\n",
-"```\n",
-"````"
+":::\n",
+"::::"
 ]
 },
 {
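The `add_new_dim` function in the `newdim` exercise above is plain NumPy, so its effect can be checked outside xarray; the trailing size-1 axis it appends is the one `output_core_dims=[["newdim"]]` names. A minimal sketch, using only the function quoted in the diff:

```python
import numpy as np

def add_new_dim(array):
    # Appends a size-1 axis at the end; apply_ufunc would label this
    # new trailing axis "newdim" via output_core_dims.
    return np.expand_dims(array, axis=-1)

result = add_new_dim(np.zeros((4, 3)))
print(result.shape)  # (4, 3, 1)
```

No `input_core_dims` are needed, since the function works on the whole array and removes no dimensions.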

advanced/apply_ufunc/core-dimensions.ipynb

Lines changed: 5 additions & 5 deletions
@@ -335,13 +335,12 @@
 "tags": []
 },
 "source": [
-"```{exercise}\n",
-":label: trapezoid\n",
+"::::{admonition} Exercise\n",
+":class: tip\n",
 "\n",
 "Use `apply_ufunc` to apply `scipy.integrate.trapezoid` along the `time` axis.\n",
-"```\n",
 "\n",
-"````{solution} trapezoid\n",
+":::{admonition} Solution\n",
 ":class: dropdown\n",
 "\n",
 "```python\n",
@@ -350,7 +349,8 @@
 "\n",
 "xr.apply_ufunc(scipy.integrate.trapezoid, ds, input_core_dims=[[\"time\"]], kwargs={\"axis\": -1})\n",
 "```\n",
-"````"
+":::\n",
+"::::"
 ]
 },
 ],
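The solution integrates along the last axis, which is where `apply_ufunc` moves the `time` core dimension. A dependency-free NumPy stand-in for the trapezoidal rule (my sketch, not the notebook's code) shows the reduction `scipy.integrate.trapezoid` performs there:

```python
import numpy as np

def trapezoid_last_axis(y, dx=1.0):
    # Trapezoidal rule along the last axis -- the position to which
    # apply_ufunc moves the "time" core dimension before calling us.
    return ((y[..., 1:] + y[..., :-1]) / 2 * dx).sum(axis=-1)

y = np.array([[0.0, 1.0, 2.0],
              [1.0, 1.0, 1.0]])
print(trapezoid_last_axis(y))  # [2. 2.]
```

Because the result drops the `time` dimension, no `output_core_dims` entry is needed for it.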

advanced/apply_ufunc/dask_apply_ufunc.ipynb

Lines changed: 14 additions & 13 deletions
@@ -267,15 +267,15 @@
 "Such functions involve the concept of \"core dimensions\". This concept is independent of the underlying array type, and is a property of the applied function. See the [core dimensions with NumPy](core-dimensions) tutorial for more.\n",
 "\n",
 "\n",
-"```{exercise}\n",
-":label: daskmean\n",
+"::::{admonition} Exercise\n",
+":class: tip\n",
 "\n",
 "Use `dask.array.mean` as an example of a function that can handle dask\n",
 "arrays and uses an `axis` kwarg. \n",
-"```\n",
 "\n",
-"````{solution} daskmean\n",
+":::{admonition} Solution\n",
 ":class: dropdown\n",
+"\n",
 "```python\n",
 "def time_mean(da):\n",
 " return xr.apply_ufunc(\n",
@@ -285,10 +285,11 @@
 " dask=\"allowed\",\n",
 " kwargs={\"axis\": -1}, # core dimensions are moved to the end\n",
 " )\n",
-"\n",
-"\n",
+" \n",
 "time_mean(ds.air)\n",
-"````\n"
+"```\n",
+":::\n",
+"::::\n"
 ]
 },
 {
@@ -493,12 +494,11 @@
 "tags": []
 },
 "source": [
-"```{exercise} \n",
-":label: rechunk\n",
+"::::{admonition} Exercise\n",
+":class: tip\n",
 "Apply the integrate function to `ds` after rechunking to have a different chunksize along `lon` using `ds.chunk(lon=4)` (for example). What happens?\n",
-"```\n",
 "\n",
-"```{solution} rechunk\n",
+":::{admonition} Solution\n",
 ":class: dropdown\n",
 "\n",
 "`apply_ufunc` complains that it cannot automatically parallelize because the dataset `ds` is now chunked along the core dimension `lon`. You should see the following error:\n",
@@ -509,7 +509,8 @@
 " ``.chunk(dict(lon=-1))``, or pass ``allow_rechunk=True`` in ``dask_gufunc_kwargs`` \n",
 " but beware that this may significantly increase memory usage.\n",
 "\n",
-"```"
+":::\n",
+"::::"
 ]
 },
 {
@@ -652,7 +653,7 @@
 "source": [
 "### Adding new dimensions\n",
 "\n",
-"We use the [expand_dims example](newdim) that changes the size of the input along a single dimension.\n",
+"We use the `np.expand_dims` to change the size of the input along a single dimension.\n",
 "\n",
 "```python\n",
 "def add_new_dim(array):\n",
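The `kwargs={"axis": -1}` comment in the `time_mean` solution ("core dimensions are moved to the end") is the key convention. With plain NumPy standing in for dask (an illustrative substitution), the wrapped reduction looks like:

```python
import numpy as np

def time_mean(array):
    # apply_ufunc moves core dimensions to the end, so a reduction
    # over "time" becomes a reduction over axis=-1.
    return array.mean(axis=-1)

arr = np.arange(24.0).reshape(2, 3, 4)  # e.g. (lat, lon, time) -- illustrative
print(time_mean(arr).shape)  # (2, 3)
```

The same `axis=-1` call works unchanged on a dask array, which is why `dask="allowed"` suffices in the notebook.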

advanced/apply_ufunc/numba-vectorization.ipynb

Lines changed: 8 additions & 7 deletions
@@ -7,9 +7,9 @@
 "tags": []
 },
 "source": [
-"<img src=\"https://numba.pydata.org/_static/numba-blue-horizontal-rgb.svg\" width=\"40%\" align=\"right\">\n",
+"# Fast vectorization with Numba\n",
 "\n",
-"# Fast vectorization with Numba"
+"<img src=\"https://numba.pydata.org/_static/numba-blue-horizontal-rgb.svg\" width=\"40%\" align=\"right\">"
 ]
 },
 {
@@ -241,12 +241,12 @@
 "id": "18",
 "metadata": {},
 "source": [
-"```{exercise}\n",
-":label: g\n",
+"::::{admonition} Exercise\n",
+":class: tip\n",
 "\n",
 "Apply `g` to `da_dask`\n",
-"```\n",
-"````{solution} g\n",
+"\n",
+":::{admonition} Solution\n",
 ":class: dropdown\n",
 "\n",
 "```python\n",
@@ -259,7 +259,8 @@
 " dask=\"parallelized\",\n",
 ")\n",
 "```\n",
-"````"
+":::\n",
+"::::"
 ]
 },
 {
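`numba.vectorize` turns a scalar function into a broadcasting ufunc; `np.vectorize` gives the same broadcasting semantics without the compiled speed-up, which is enough to sketch the behaviour here. The scalar body of `g` below is an assumption for illustration (the notebook's `g` is not shown in this diff):

```python
import numpy as np

# np.vectorize mimics the broadcasting behaviour of numba.vectorize,
# minus the compilation. The scalar body is illustrative only.
@np.vectorize
def g(a, b):
    return a + b

print(g(np.arange(3), 10))  # [10 11 12]
```

Applied through `apply_ufunc` with `dask="parallelized"`, such a ufunc runs chunk-by-chunk on a dask-backed DataArray like `da_dask`.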

fundamentals/02.1_indexing_Basic.ipynb

Lines changed: 39 additions & 22 deletions
@@ -204,11 +204,6 @@
 "da[:, 20, 40]"
 ]
 },
-{
-"cell_type": "markdown",
-"metadata": {},
-"source": []
-},
 {
 "cell_type": "markdown",
 "metadata": {},
@@ -581,6 +576,13 @@
 "```"
 ]
 },
+{
+"cell_type": "code",
+"execution_count": null,
+"metadata": {},
+"outputs": [],
+"source": []
+},
 {
 "cell_type": "markdown",
 "metadata": {},
@@ -590,6 +592,13 @@
 "Another technique is to use a list of datetime objects or date strings for indexing. For example, you could select data for specific, non-contiguous dates like this:"
 ]
 },
+{
+"cell_type": "code",
+"execution_count": null,
+"metadata": {},
+"outputs": [],
+"source": []
+},
 {
 "cell_type": "code",
 "execution_count": null,
@@ -602,6 +611,13 @@
 "ds.sel(time=dates)"
 ]
 },
+{
+"cell_type": "code",
+"execution_count": null,
+"metadata": {},
+"outputs": [],
+"source": []
+},
 {
 "cell_type": "markdown",
 "metadata": {},
@@ -663,54 +679,55 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"```{exercise}\n",
-":label: indexing-1\n",
+"::::{admonition} Exercise\n",
+":class: tip\n",
 "\n",
 "Select the first 30 entries of `latitude` and 30th to 40th entries of `longitude`:\n",
-"```\n",
 "\n",
-"````{solution} indexing-1\n",
+":::{admonition} Solution\n",
 ":class: dropdown\n",
 "```python\n",
 "ds.isel(lat=slice(None, 30), lon=slice(30, 40))\n",
 "```\n",
 "\n",
-"````"
+":::\n",
+"::::"
 ]
 },
 {
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"```{exercise}\n",
-":label: indexing-2\n",
+"::::{admonition} Exercise\n",
+":class: tip\n",
 "\n",
-"Select all data at 75 degree north and between Jan 1, 2013 and Oct 15, 2013 :\n",
-"```\n",
-"````{solution} indexing-2\n",
+"Select all data at 75 degree north and between Jan 1, 2013 and Oct 15, 2013\n",
+"\n",
+":::{admonition} Solution\n",
 ":class: dropdown\n",
 "```python\n",
 "ds.sel(lat=75, time=slice(\"2013-01-01\", \"2013-10-15\"))\n",
 "```\n",
-"````"
+":::\n",
+"::::"
 ]
 },
 {
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"```{exercise}\n",
-":label: indexing-3\n",
+"::::{admonition} Exercise\n",
+":class: tip\n",
 "\n",
-"Remove all entries at 260 and 270 degrees :\n",
+"Remove all entries at 260 and 270 degrees\n",
 "\n",
-"```\n",
-"````{solution} indexing-3\n",
+":::{admonition} Solution\n",
 ":class: dropdown\n",
 "```python\n",
 "ds.drop_sel(lon=[260, 270])\n",
 "```\n",
-"````"
+":::\n",
+"::::"
 ]
 },
 {
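The three indexing solutions rely on positional slices (`isel`), label slices (`sel`), and label removal (`drop_sel`). Their plain-NumPy equivalents can be checked directly; the coordinate values below are illustrative assumptions, not the tutorial dataset:

```python
import numpy as np

lat = np.arange(0.0, 90.0, 2.5)     # illustrative latitude values
lon = np.arange(200.0, 330.0, 2.5)  # illustrative longitude values

# isel-style positional selection: first 30 latitudes, 30th-40th longitudes
lat_sel = lat[slice(None, 30)]
lon_sel = lon[slice(30, 40)]
print(lat_sel.size, lon_sel.size)  # 30 10

# drop_sel-style removal by label, here via a boolean mask
keep = ~np.isin(lon, [260.0, 270.0])
print(np.isin([260.0, 270.0], lon[keep]).any())  # False
```

Unlike these positional operations, `sel` matches coordinate *labels*, which is why `ds.sel(lat=75, ...)` needs no knowledge of where 75°N sits in the array.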

0 commit comments