@@ -67,7 +67,7 @@ download and convert the ImageNet data to native TFRecord format. The TFRecord
format consists of a set of sharded files where each entry is a serialized
`tf.Example` proto. Each `tf.Example` proto contains the ImageNet image (JPEG
encoded) as well as metadata such as label and bounding box information. See
- [`parse_example_proto`](image_processing.py) for details.
+ [`parse_example_proto`](inception/image_processing.py) for details.
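To see exactly what one of these records holds, a quick sketch like the following dumps the first entry of a shard (the shard path is a placeholder, and this assumes the TF 1.x Python API):

```shell
# Hypothetical: print the first serialized tf.Example from a TFRecord shard.
python -c "
import tensorflow as tf
it = tf.python_io.tf_record_iterator('/tmp/imagenet-data/train-00000-of-01024')
print(tf.train.Example.FromString(next(it)))
"
```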

We provide a single
[script](inception/data/download_and_preprocess_imagenet.sh)
@@ -155,7 +155,7 @@ We have tested several hardware setups for training this model from scratch but
we emphasize that, depending on your hardware setup, you may need to adapt the
batch size and learning rate schedule.

- Please see the comments in `inception_train.py` for a few selected learning rate
+ Please see the comments in [`inception_train.py`](inception/inception_train.py) for a few selected learning rate
plans based on several hardware setups.

To train this model, you simply need to specify the following:
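For example, a single-GPU run might look like this sketch (the build target and flags here are assumptions consistent with the rest of this README; adapt the values to your hardware):

```shell
# Hypothetical invocation; tune --batch_size per the hardware notes above.
bazel-bin/inception/imagenet_train --num_gpus=1 --batch_size=32 \
  --train_dir=/tmp/imagenet_train --data_dir=/tmp/imagenet-data
```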
@@ -281,7 +281,7 @@ prediction from the model matched the ImageNet label -- in this case, 73.5%.
If you wish to run the eval just once and not periodically, append the
`--run_once` option.
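For instance (only `--run_once` is taken from this section; the directory flags and paths are assumptions modeled on the training script's conventions):

```shell
# Hypothetical one-shot evaluation against the most recent checkpoint.
bazel-bin/inception/imagenet_eval --checkpoint_dir=/tmp/imagenet_train \
  --eval_dir=/tmp/imagenet_eval --run_once
```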

- Much like the training script, `imagenet_eval.py` also
+ Much like the training script, [`imagenet_eval.py`](inception/imagenet_eval.py) also
exports summaries that may be visualized in TensorBoard. These summaries
calculate additional statistics on the predictions (e.g. recall @ 5) as well
as monitor the statistics of the model activations and weights during
@@ -303,7 +303,7 @@ There is a single automated script that downloads the data set and converts
it to the TFRecord format. Much like the ImageNet data set, each record in the
TFRecord format is a serialized `tf.Example` proto whose entries include
a JPEG-encoded string and an integer label. Please see
- [`parse_example_proto`](image_processing.py) for details.
+ [`parse_example_proto`](inception/image_processing.py) for details.
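As a sketch, the invocation might look like the following (the script name assumes a flowers counterpart to the ImageNet download script referenced earlier; the output directory is a placeholder):

```shell
# Hypothetical: fetch the flowers data and convert it to TFRecord shards.
FLOWERS_DATA_DIR=/tmp/flowers-data
bazel-bin/inception/download_and_preprocess_flowers "${FLOWERS_DATA_DIR}"
```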

The script just takes a few minutes to run depending on your network connection
speed for downloading and processing the images. Your hard disk requires 200MB
@@ -333,10 +333,10 @@ files in the `DATA_DIR`. The files will match the patterns
`train-?????-of-00001` and `validation-?????-of-00001`, respectively.
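Concretely, listing the directory should show shards matching those patterns, along the lines of this sketch:

```shell
# The flowers conversion in this example produces one shard per split.
ls -1 "${DATA_DIR}"
# train-00000-of-00001
# validation-00000-of-00001
```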

**NOTE** If you wish to prepare a custom image data set for transfer learning,
- you will need to invoke [`build_image_data.py`](data/build_image_data.py)
+ you will need to invoke [`build_image_data.py`](inception/data/build_image_data.py)
on your custom data set.
Please see the associated options and assumptions behind this script by reading
- the comments section of [`build_image_data.py`](data/build_image_data.py).
+ the comments section of [`build_image_data.py`](inception/data/build_image_data.py).

The second piece you will need is a trained Inception v3 image model. You have
the option of either training one yourself (See
@@ -390,7 +390,7 @@ if you wish to continue training a pre-trained model from a checkpoint. If you
set this flag to true, you can train a new classification layer from scratch.

In order to understand how `--fine_tune` works, please see the discussion
- on `Variables` in the TensorFlow-Slim [`README.md`](slim/README.md).
+ on `Variables` in the TensorFlow-Slim [`README.md`](inception/slim/README.md).

Putting this all together, you can retrain a pre-trained Inception-v3 model
on the flowers data set with the following command.
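A sketch of that command (the build target and flag names are assumptions consistent with the `--fine_tune` discussion above; the paths and learning rate are placeholders):

```shell
# Hypothetical: restore pre-trained weights, then train only the new
# classification layer on the flowers data.
bazel-bin/inception/flowers_train --train_dir=/tmp/flowers_train \
  --data_dir=/tmp/flowers-data \
  --pretrained_model_checkpoint_path=/path/to/model.ckpt \
  --fine_tune=True --initial_learning_rate=0.001
```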
@@ -472,7 +472,7 @@ Succesfully loaded model from /tmp/flowers/model.ckpt-1999 at step=1999.

One can use the existing scripts supplied with this model to build a new
dataset for training or fine-tuning. The main script to employ is
- [`build_image_data.py`](./build_image_data.py). Briefly,
+ [`build_image_data.py`](inception/data/build_image_data.py). Briefly,
this script takes a structured
directory of images and converts it to a sharded `TFRecord` that can be read
by the Inception model.
@@ -503,12 +503,12 @@ unique label for the images that reside within that sub-directory. The images
may be JPEG or PNG images. We do not support other image types currently.

Once the data is arranged in this directory structure, we can run
- `build_image_data.py` on the data to generate the sharded `TFRecord` dataset.
+ [`build_image_data.py`](inception/data/build_image_data.py) on the data to generate the sharded `TFRecord` dataset.
Each entry of the `TFRecord` is a serialized `tf.Example` protocol buffer.
A complete list of information contained in the `tf.Example` is described
- in the comments of `build_image_data.py`.
+ in the comments of [`build_image_data.py`](inception/data/build_image_data.py).

- To run `build_image_data.py`, you can run the following command line:
+ To run [`build_image_data.py`](inception/data/build_image_data.py), use the following command line:

```shell
# location to where to save the TFRecord data.
@@ -578,7 +578,7 @@ some general considerations for novices.

Roughly 5-10 hyper-parameters govern the speed at which a network is trained.
In addition to `--batch_size` and `--num_gpus`, there are several constants
- defined in [inception_train.py](./inception_train.py) which dictate the
+ defined in [inception_train.py](inception/inception_train.py) which dictate the
learning schedule.

```shell
@@ -652,7 +652,7 @@ model architecture, this corresponds to 16GB of CPU memory. You may lower
`input_queue_memory_factor` in order to decrease the memory footprint. Keep
in mind, though, that lowering this value drastically may result in a model
with slightly lower predictive accuracy when training from scratch. Please
- see comments in [`image_processing.py`](./image_processing.py) for more details.
+ see comments in [`image_processing.py`](inception/image_processing.py) for more details.
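For example (the flag spelling follows the constant named above; the value of 8 and the paths are placeholders):

```shell
# Hypothetical: request a smaller preprocessing queue to cut CPU memory usage.
bazel-bin/inception/imagenet_train --num_gpus=1 --batch_size=32 \
  --train_dir=/tmp/imagenet_train --data_dir=/tmp/imagenet-data \
  --input_queue_memory_factor=8
```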

## Troubleshooting

@@ -693,7 +693,7 @@ the entire model architecture.
We targeted a desktop with 128GB of CPU RAM connected to 8 NVIDIA Tesla K40
GPU cards, but we have run this on desktops with 32GB of CPU RAM and 1 NVIDIA
Tesla K40. You can get a sense of the various training configurations we
- tested by reading the comments in [`inception_train.py`](./inception_train.py).
+ tested by reading the comments in [`inception_train.py`](inception/inception_train.py).