
Commit 3ad3d80 (parent 1d1b340)

Commit message: update broken code links

File tree: 1 file changed (+14, −14 lines)


inception/README.md

Lines changed: 14 additions & 14 deletions
````diff
@@ -67,7 +67,7 @@ download and convert the ImageNet data to native TFRecord format. The TFRecord
 format consists of a set of sharded files where each entry is a serialized
 `tf.Example` proto. Each `tf.Example` proto contains the ImageNet image (JPEG
 encoded) as well as metadata such as label and bounding box information. See
-[`parse_example_proto`](image_processing.py) for details.
+[`parse_example_proto`](inception/image_processing.py) for details.

 We provide a single
 [script](inception/data/download_and_preprocess_imagenet.sh)
@@ -155,7 +155,7 @@ We have tested several hardware setups for training this model from scratch but
 we emphasize that depending your hardware set up, you may need to adapt the
 batch size and learning rate schedule.

-Please see the comments in `inception_train.py` for a few selected learning rate
+Please see the comments in [`inception_train.py`](inception/inception_train.py) for a few selected learning rate
 plans based on some selected hardware setups.

 To train this model, you simply need to specify the following:
@@ -281,7 +281,7 @@ prediction from the model matched the ImageNet label -- in this case, 73.5%.
 If you wish to run the eval just once and not periodically, append the
 `--run_once` option.

-Much like the training script, `imagenet_eval.py` also
+Much like the training script, [`imagenet_eval.py`](inception/imagenet_eval.py) also
 exports summaries that may be visualized in TensorBoard. These summaries
 calculate additional statistics on the predictions (e.g. recall @ 5) as well
 as monitor the statistics of the model activations and weights during
@@ -303,7 +303,7 @@ There is a single automated script that downloads the data set and converts
 it to the TFRecord format. Much like the ImageNet data set, each record in the
 TFRecord format is a serialized `tf.Example` proto whose entries include
 a JPEG-encoded string and an integer label. Please see
-[`parse_example_proto`](image_processing.py) for details.
+[`parse_example_proto`](inception/image_processing.py) for details.

 The script just takes a few minutes to run depending your network connection
 speed for downloading and processing the images. Your hard disk requires 200MB
@@ -333,10 +333,10 @@ files in the `DATA_DIR`. The files will match the patterns
 `train-????-of-00001` and `validation-?????-of-00001`, respectively.

 **NOTE** If you wish to prepare a custom image data set for transfer learning,
-you will need to invoke [`build_image_data.py`](data/build_image_data.py)
+you will need to invoke [`build_image_data.py`](inception/data/build_image_data.py)
 on your custom data set.
 Please see the associated options and assumptions behind this script by reading
-the comments section of [`build_image_data.py`](data/build_image_data.py).
+the comments section of [`build_image_data.py`](inception/data/build_image_data.py).

 The second piece you will need is a trained Inception v3 image model. You have
 the option of either training one yourself (See
@@ -390,7 +390,7 @@ if you wish to continue training a pre-trained model from a checkpoint. If you
 set this flag to true, you can train a new classification layer from scratch.

 In order to understand how `--fine_tune` works, please see the discussion
-on `Variables` in the TensorFlow-Slim [`README.md`](slim/README.md).
+on `Variables` in the TensorFlow-Slim [`README.md`](inception/slim/README.md).

 Putting this all together you can retrain a pre-trained Inception-v3 model
 on the flowers data set with the following command.
@@ -472,7 +472,7 @@ Succesfully loaded model from /tmp/flowers/model.ckpt-1999 at step=1999.

 One can use the existing scripts supplied with this model to build a new
 dataset for training or fine-tuning. The main script to employ is
-[`build_image_data.py`](./build_image_data.py). Briefly,
+[`build_image_data.py`](inception/data/build_image_data.py). Briefly,
 this script takes a structured
 directory of images and converts it to a sharded `TFRecord` that can be read
 by the Inception model.
@@ -503,12 +503,12 @@ unique label for the images that reside within that sub-directory. The images
 may be JPEG or PNG images. We do not support other images types currently.

 Once the data is arranged in this directory structure, we can run
-`build_image_data.py` on the data to generate the sharded `TFRecord` dataset.
+[`build_image_data.py`](inception/data/build_image_data.py) on the data to generate the sharded `TFRecord` dataset.
 Each entry of the `TFRecord` is a serialized `tf.Example` protocol buffer.
 A complete list of information contained in the `tf.Example` is described
-in the comments of `build_image_data.py`.
+in the comments of [`build_image_data.py`](inception/data/build_image_data.py).

-To run `build_image_data.py`, you can run the following command line:
+To run [`build_image_data.py`](inception/data/build_image_data.py), you can run the following command line:

 ```shell
 # location to where to save the TFRecord data.
@@ -578,7 +578,7 @@ some general considerations for novices.

 Roughly 5-10 hyper-parameters govern the speed at which a network is trained.
 In addition to `--batch_size` and `--num_gpus`, there are several constants
-defined in [inception_train.py](./inception_train.py) which dictate the
+defined in [inception_train.py](inception/inception_train.py) which dictate the
 learning schedule.

 ```shell
@@ -652,7 +652,7 @@ model architecture, this corresponds to 16GB of CPU memory. You may lower
 `input_queue_memory_factor` in order to decrease the memory footprint. Keep
 in mind though that lowering this value drastically may result in a model
 with slightly lower predictive accuracy when training from scratch. Please
-see comments in [`image_processing.py`](./image_processing.py) for more details.
+see comments in [`image_processing.py`](inception/image_processing.py) for more details.

 ## Troubleshooting

@@ -693,7 +693,7 @@ the entire model architecture.
 We targeted a desktop with 128GB of CPU ram connected to 8 NVIDIA Tesla K40
 GPU cards but we have run this on desktops with 32GB of CPU ram and 1 NVIDIA
 Tesla K40. You can get a sense of the various training configurations we
-tested by reading the comments in [`inception_train.py`](./inception_train.py).
+tested by reading the comments in [`inception_train.py`](inception/inception_train.py).
````
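The pattern in this commit is mechanical: every link target that was relative to the README's own directory (`image_processing.py`, `./inception_train.py`, `slim/README.md`) is rewritten relative to the repository root (`inception/...`), matching the context from which the README is rendered. Breakage like this is easy to catch with a link checker. The sketch below is a hypothetical helper, not part of this repository, that flags inline Markdown links whose relative targets do not exist on disk; the resolution base directory is a parameter, since the whole bug here was a mismatch between the base the links assumed and the base the renderer used.

```python
import os
import re

# Matches inline Markdown links of the form [text](target).
LINK_RE = re.compile(r"\[[^\]]*\]\(([^)\s]+)\)")

def broken_relative_links(readme_path, root=None):
    """Return relative link targets in readme_path that do not resolve.

    root is the directory links are resolved against; by default the
    README's own directory (a root-of-repo renderer would pass the repo
    root instead).
    """
    base = root or os.path.dirname(os.path.abspath(readme_path))
    with open(readme_path) as f:
        text = f.read()
    broken = []
    for target in LINK_RE.findall(text):
        # Skip absolute URLs and in-page anchors; check files only.
        if "://" in target or target.startswith("#"):
            continue
        if not os.path.exists(os.path.join(base, target.split("#")[0])):
            broken.append(target)
    return broken
```

Under this sketch's assumptions, checking the old README against the repository root would have reported targets like `image_processing.py` and `slim/README.md`, since those files actually live under `inception/`.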