
Commit 18fc407
Fix review comments - part 2
1 parent 957ee97

File tree

1 file changed: +87 −19 lines


samples/core/tutorials/estimators/wide.ipynb

Lines changed: 87 additions & 19 deletions
@@ -122,6 +122,19 @@
 " add the root directory to your python path, and jump to the `wide_deep` directory:"
 ]
 },
+{
+"metadata": {
+"id": "tTwQzWcn8aBu",
+"colab_type": "code",
+"colab": {}
+},
+"cell_type": "code",
+"source": [
+"! git clone --depth 1 https://github.com/tensorflow/models"
+],
+"execution_count": 0,
+"outputs": []
+},
 {
 "metadata": {
 "id": "yVvFyhnkcYvL",
@@ -130,12 +143,9 @@
 },
 "cell_type": "code",
 "source": [
-"if \"wide_deep\" not in os.getcwd():\n",
-" ! git clone --depth 1 https://github.com/tensorflow/models\n",
-" models_path = os.path.join(os.getcwd(), 'models')\n",
-" sys.path.append(models_path) \n",
-" os.environ['PYTHONPATH'] += os.pathsep+models_path\n",
-" os.chdir(\"models/official/wide_deep\")"
+"models_path = os.path.join(os.getcwd(), 'models')\n",
+"\n",
+"sys.path.append(models_path)"
 ],
 "execution_count": 0,
 "outputs": []
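The hunk above moves the `sys.path` setup out of the clone cell. A minimal stdlib sketch of what the append (and the later `PYTHONPATH` update from this same commit) accomplishes; the `models` directory name comes from the diff, and `.get` is used here to tolerate an unset variable, unlike the notebook's bare `+=`:

```python
import os
import sys

# Clone location, mirroring the notebook's `models_path`.
models_path = os.path.join(os.getcwd(), "models")

# Make the package importable in the current interpreter...
if models_path not in sys.path:
    sys.path.append(models_path)

# ...and in any child process, by extending PYTHONPATH with the
# platform's path separator (":" on POSIX, ";" on Windows).
os.environ["PYTHONPATH"] = (
    os.environ.get("PYTHONPATH", "") + os.pathsep + models_path
)
```

Appending (rather than prepending) keeps already-installed packages winning any name collisions with the cloned repository.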
@@ -158,8 +168,8 @@
 },
 "cell_type": "code",
 "source": [
-"import census_dataset\n",
-"import census_main\n",
+"from official.wide_deep import census_dataset\n",
+"from official.wide_deep import census_main\n",
 "\n",
 "census_dataset.download(\"/tmp/census_data/\")"
 ],
@@ -173,19 +183,65 @@
 },
 "cell_type": "markdown",
 "source": [
-"Execute the tutorial code with the following command to train the model described in this tutorial, from the command line:"
+"To execute the tutorial code from the command line first add the path to tensorflow/models to your `PYTHONPATH`."
+]
+},
+{
+"metadata": {
+"id": "DYOkY8boUptJ",
+"colab_type": "code",
+"colab": {}
+},
+"cell_type": "code",
+"source": [
+"#export PYTHONPATH=${PYTHONPATH}:\"$(pwd)/models\"\n",
+"os.environ['PYTHONPATH'] += os.pathsep+models_path"
+],
+"execution_count": 0,
+"outputs": []
+},
+{
+"metadata": {
+"id": "5r0V9YUMUyoh",
+"colab_type": "text"
+},
+"cell_type": "markdown",
+"source": [
+"Use `--help` to see what command line options are available: "
 ]
 },
 {
 "metadata": {
-"id": "vbJ8jPAhcYvT",
+"id": "1_3tBaLW4YM4",
 "colab_type": "code",
 "colab": {}
 },
 "cell_type": "code",
 "source": [
-"output = !python -m census_main --model_type=wide --train_epochs=2\n",
-"print([line for line in output if 'accuracy:' in line])"
+"!python -m official.wide_deep.census_main --help"
+],
+"execution_count": 0,
+"outputs": []
+},
+{
+"metadata": {
+"id": "RrMLazEN6DMj",
+"colab_type": "text"
+},
+"cell_type": "markdown",
+"source": [
+"Now run the model:\n"
+]
+},
+{
+"metadata": {
+"id": "py7MarZl5Yh6",
+"colab_type": "code",
+"colab": {}
+},
+"cell_type": "code",
+"source": [
+"!python -m official.wide_deep.census_main --model_type=wide --train_epochs=2"
 ],
 "execution_count": 0,
 "outputs": []
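The new cells above invoke the trainer as a package module (`python -m official.wide_deep.census_main`) rather than as a bare script, which is why the `PYTHONPATH` step matters: `-m` resolves the dotted name against `sys.path`. A hedged stdlib illustration of `-m` dispatch, using `json.tool` as a stand-in module since `official.wide_deep` only exists after the clone:

```python
import subprocess
import sys

# Run a module by dotted name with the current interpreter; Python
# resolves it against sys.path (seeded from PYTHONPATH in a child process).
result = subprocess.run(
    [sys.executable, "-m", "json.tool", "--help"],
    capture_output=True,
    text=True,
)

print(result.returncode)  # 0 on success
```

Swapping `json.tool` for `official.wide_deep.census_main` reproduces the notebook's shell cells, provided `PYTHONPATH` contains the clone directory.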
@@ -322,8 +378,7 @@
 "cell_type": "code",
 "source": [
 "def easy_input_function(df, label_key, num_epochs, shuffle, batch_size):\n",
-" df = df.copy()\n",
-" label = df.pop(label_key)\n",
+" label = df[label_key]\n",
 " ds = tf.data.Dataset.from_tensor_slices((dict(df),label))\n",
 "\n",
 " if shuffle:\n",
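The one-line change above stops `easy_input_function` from mutating the caller's DataFrame (the old `pop` removed the label column in place, which the `copy()` guarded against), at the cost of leaving the label column among the features. The pop-versus-index distinction is the same as for plain Python mappings; the record below is illustrative:

```python
row = {"age": 39, "income_bracket": ">50K"}

# pop-style: separates the label from the features, mutating the mapping
features = dict(row)                      # work on a copy, like the old df.copy()
label_via_pop = features.pop("income_bracket")

# index-style: reads the label but leaves the original mapping untouched
label_via_index = row["income_bracket"]

print("income_bracket" in features)  # False
print("income_bracket" in row)       # True
```

With the index style, filtering the label out of the feature dict (if desired) becomes the caller's responsibility.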
@@ -374,9 +429,9 @@
 "cell_type": "markdown",
 "source": [
 "But this approach has severly-limited scalability. For larger data it should be streamed off disk.\n",
-"the `census_dataset.input_fn` provides an example of how to do this using `tf.decode_csv` and `tf.data.TextLineDataset`: \n",
+"The `census_dataset.input_fn` provides an example of how to do this using `tf.decode_csv` and `tf.data.TextLineDataset`: \n",
 "\n",
-"TODO(markdaoust): This `input_fn` should use `tf.contrib.data.make_csv_dataset`"
+"<!-- TODO(markdaoust): This `input_fn` should use `tf.contrib.data.make_csv_dataset` -->"
 ]
 },
 {
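The edited markdown above points at `tf.data.TextLineDataset` for streaming large files off disk. The core idea, parsing one CSV line at a time instead of materializing the whole table, can be sketched without TensorFlow using only the stdlib; the column names and data below are invented for the example:

```python
import csv
import io

# Stand-in for a file on disk; TextLineDataset would stream a real path.
raw = io.StringIO("39,Bachelors,>50K\n50,HS-grad,<=50K\n")

columns = ["age", "education", "income_bracket"]

def stream_rows(fileobj):
    """Yield one parsed record at a time, never holding the full file."""
    for fields in csv.reader(fileobj):
        yield dict(zip(columns, fields))

first = next(stream_rows(raw))
print(first["education"])  # Bachelors
```

`tf.decode_csv` plays the role of `csv.reader` here, additionally casting each field to its declared dtype.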
@@ -470,7 +525,7 @@
 "\n",
 "Estimators use a system called `feature_columns` to describe how the model\n",
 "should interpret each of the raw input features. An Estimator exepcts a vector\n",
-"of numeric inputs, and feature columns describe how the model shoukld convert\n",
+"of numeric inputs, and feature columns describe how the model should convert\n",
 "each feature.\n",
 "\n",
 "Selecting and crafting the right set of feature columns is key to learning an\n",
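The corrected sentence above says feature columns describe how each raw feature becomes numeric input. As a rough, framework-free analogue, a categorical column with a fixed vocabulary is just a one-hot lookup; the vocabulary below is invented:

```python
# Hypothetical vocabulary, standing in for a categorical feature column
# defined over a fixed list of values.
vocabulary = ["HS-grad", "Bachelors", "Masters"]

def one_hot(value, vocab):
    """Map a raw categorical value to the numeric vector the model sees."""
    vec = [0.0] * len(vocab)
    if value in vocab:
        vec[vocab.index(value)] = 1.0
    return vec

print(one_hot("Bachelors", vocabulary))  # [0.0, 1.0, 0.0]
```

Out-of-vocabulary values map to the all-zeros vector in this sketch; real feature columns offer explicit OOV handling.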
@@ -752,7 +807,7 @@
 },
 "cell_type": "markdown",
 "source": [
-"if we run `input_layer` with the hashed column we see that the output shape is `(batch_size, hash_bucket_size)`"
+"If we run `input_layer` with the hashed column we see that the output shape is `(batch_size, hash_bucket_size)`"
 ]
 },
 {
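The capitalization fix above sits in a cell about hashed columns: each string is hashed into one of `hash_bucket_size` buckets, which is why `input_layer` emits shape `(batch_size, hash_bucket_size)`. A small sketch of the bucketing, using a stable stdlib hash rather than TensorFlow's internal fingerprint function:

```python
import hashlib

def hash_bucket(value, hash_bucket_size):
    """Deterministically assign a string to one of N buckets."""
    digest = hashlib.md5(value.encode("utf-8")).hexdigest()
    return int(digest, 16) % hash_bucket_size

batch = ["Exec-managerial", "Craft-repair", "Exec-managerial"]
buckets = [hash_bucket(v, hash_bucket_size=1000) for v in batch]

# Equal inputs always land in the same bucket.
print(buckets[0] == buckets[2])  # True
```

One-hot encoding each bucket index then yields the `(batch_size, hash_bucket_size)` matrix the cell describes; distinct values may collide into the same bucket, which is the usual trade-off of hashed columns.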
@@ -1259,11 +1314,24 @@
 "\n",
 "For more about estimators:\n",
 "\n",
-"- The [TensorFlow Hub transfer-learning tutorial](https://www.tensorflow.org/hub/tutorials/text_classification_with_tf_hub)\n",
+"- The [TensorFlow Hub text classification tutorial](https://www.tensorflow.org/hub/tutorials/text_classification_with_tf_hub) uses `hub.text_embedding_column` to easily ingest free form text. \n",
 "- The [Gradient-boosted-trees estimator tutorial](https://github.com/tensorflow/models/tree/master/official/boosted_trees)\n",
 "- This [blog post]( https://medium.com/tensorflow/classifying-text-with-tensorflow-estimators) on processing text with `Estimators`\n",
 "- How to [build a custom CNN estimator](https://www.tensorflow.org/tutorials/estimators/cnn)"
 ]
+},
+{
+"metadata": {
+"id": "amMnupRPVtsa",
+"colab_type": "code",
+"colab": {}
+},
+"cell_type": "code",
+"source": [
+""
+],
+"execution_count": 0,
+"outputs": []
 }
 ]
}
