Commit 70b176a

Merge pull request tensorflow#4764 from pkulzc/master

Adding new features to extend the functionality and capability of the API

Authored by Jonathan Huang. 2 parents: 11070af + e2d4637.

File tree: 11 files changed, +743 −130 lines


research/object_detection/README.md

Lines changed: 26 additions & 0 deletions
@@ -77,6 +77,10 @@ Extras:
   Run an instance segmentation model</a><br>
 * <a href='g3doc/challenge_evaluation.md'>
   Run the evaluation for the Open Images Challenge 2018</a><br>
+* <a href='g3doc/tpu_compatibility.md'>
+  TPU compatible detection pipelines</a><br>
+* <a href='g3doc/running_on_mobile_tensorflowlite.md'>
+  Running object detection on mobile devices with TensorFlow Lite</a><br>

 ## Getting Help

@@ -95,6 +99,28 @@ reporting an issue.

 ## Release information

+### July 13, 2018
+
+There are many new updates in this release, extending the functionality and
+capability of the API:
+
+* Moving from slim-based training to [Estimator](https://www.tensorflow.org/api_docs/python/tf/estimator/Estimator)-based
+  training.
+* Support for [RetinaNet](https://arxiv.org/abs/1708.02002), and a [MobileNet](https://ai.googleblog.com/2017/06/mobilenets-open-source-models-for.html)
+  adaptation of RetinaNet.
+* A novel SSD-based architecture called the [Pooling Pyramid Network](https://arxiv.org/abs/1807.03284) (PPN).
+* Releasing several [TPU](https://cloud.google.com/tpu/)-compatible models.
+  These can be found in the `samples/configs/` directory with a comment in the
+  pipeline configuration files indicating TPU compatibility.
+* Support for quantized training.
+* Updated documentation for new binaries, Cloud training, and [Tensorflow Lite](https://www.tensorflow.org/mobile/tflite/).
+
+See also our [expanded announcement blogpost](https://ai.googleblog.com/2018/07/accelerated-training-and-inference-with.html) and accompanying tutorial at the [TensorFlow blog](https://medium.com/tensorflow/training-and-serving-a-realtime-mobile-object-detector-in-30-minutes-with-cloud-tpus-b78971cf1193).
+
+<b>Thanks to contributors</b>: Sara Robinson, Aakanksha Chowdhery, Derek Chow,
+Pengchong Jin, Jonathan Huang, Vivek Rathod, Zhichao Lu, Ronny Votel
+
 ### June 25, 2018

 Additional evaluation tools for the [Open Images Challenge 2018](https://storage.googleapis.com/openimages/web/challenge.html) are out.
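
Editor's note on the "Support for quantized training" bullet above: the TPU-compatible sample configs in `samples/configs/` request quantization-aware training through a `graph_rewriter` block in the pipeline config. The fragment below is a hedged sketch, not text from this commit; the field values are assumptions and should be checked against the shipped sample configs.

```bash
# Hedged sketch only: the graph_rewriter block and its field values are our
# assumptions about the sample quantized configs, not text from this commit.
# Appending such a block to a pipeline config is how quantization-aware
# training is requested.
cat >> ${PIPELINE_CONFIG_PATH} <<'EOF'
graph_rewriter {
  quantization {
    delay: 48000        # steps of float training before quantizing (assumed value)
    weight_bits: 8
    activation_bits: 8
  }
}
EOF
```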
Lines changed: 136 additions & 0 deletions
@@ -0,0 +1,136 @@
# Copyright 2018 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================

FROM tensorflow/tensorflow:nightly-devel

# Get the tensorflow models research directory, and move it into the tensorflow
# source folder to match the recommendation of the installation instructions.
RUN git clone --depth 1 https://github.com/tensorflow/models.git && \
    mv models /tensorflow/models

# Install the gcloud and gsutil commands
# https://cloud.google.com/sdk/docs/quickstart-debian-ubuntu
RUN export CLOUD_SDK_REPO="cloud-sdk-$(lsb_release -c -s)" && \
    echo "deb http://packages.cloud.google.com/apt $CLOUD_SDK_REPO main" | tee -a /etc/apt/sources.list.d/google-cloud-sdk.list && \
    curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add - && \
    apt-get update -y && apt-get install google-cloud-sdk -y

# Install the Tensorflow Object Detection API, following
# https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/installation.md

# Install object detection API dependencies
RUN apt-get install -y protobuf-compiler python-pil python-lxml python-tk && \
    pip install Cython && \
    pip install contextlib2 && \
    pip install jupyter && \
    pip install matplotlib

# Install pycocotools
RUN git clone --depth 1 https://github.com/cocodataset/cocoapi.git && \
    cd cocoapi/PythonAPI && \
    make -j8 && \
    cp -r pycocotools /tensorflow/models/research && \
    cd ../../ && \
    rm -rf cocoapi

# Get protoc 3.0.0, rather than the old version already in the container
RUN curl -OL "https://github.com/google/protobuf/releases/download/v3.0.0/protoc-3.0.0-linux-x86_64.zip" && \
    unzip protoc-3.0.0-linux-x86_64.zip -d proto3 && \
    mv proto3/bin/* /usr/local/bin && \
    mv proto3/include/* /usr/local/include && \
    rm -rf proto3 protoc-3.0.0-linux-x86_64.zip

# Run protoc on the object detection repo
RUN cd /tensorflow/models/research && \
    protoc object_detection/protos/*.proto --python_out=.

# Set the PYTHONPATH to finish installing the API
ENV PYTHONPATH $PYTHONPATH:/tensorflow/models/research:/tensorflow/models/research/slim

# Install wget (to make life easier below) and editors (to allow people to edit
# the files inside the container)
RUN apt-get install -y wget vim emacs nano

# Grab various data files which are used throughout the demo: dataset,
# pretrained model, and pretrained TensorFlow Lite model. Install these all in
# the same directories as recommended by the blog post.

# Pets example dataset
RUN mkdir -p /tmp/pet_faces_tfrecord/ && \
    cd /tmp/pet_faces_tfrecord && \
    curl "http://download.tensorflow.org/models/object_detection/pet_faces_tfrecord.tar.gz" | tar xzf -

# Pretrained model
# This one doesn't need its own directory, since it comes in a folder.
RUN cd /tmp && \
    curl -O "http://download.tensorflow.org/models/object_detection/ssd_mobilenet_v1_0.75_depth_300x300_coco14_sync_2018_07_03.tar.gz" && \
    tar xzf ssd_mobilenet_v1_0.75_depth_300x300_coco14_sync_2018_07_03.tar.gz && \
    rm ssd_mobilenet_v1_0.75_depth_300x300_coco14_sync_2018_07_03.tar.gz

# Trained TensorFlow Lite model. This should get replaced by one generated from
# export_tflite_ssd_graph.py when that command is called.
RUN cd /tmp && \
    curl -L -o tflite.zip \
      https://storage.googleapis.com/download.tensorflow.org/models/tflite/frozengraphs_ssd_mobilenet_v1_0.75_quant_pets_2018_06_29.zip && \
    unzip tflite.zip -d tflite && \
    rm tflite.zip

# Install Android development tools
# Inspired by the following sources:
# https://github.com/bitrise-docker/android/blob/master/Dockerfile
# https://github.com/reddit/docker-android-build/blob/master/Dockerfile

# Set environment variables
ENV ANDROID_HOME /opt/android-sdk-linux
ENV ANDROID_NDK_HOME /opt/android-ndk-r14b
ENV PATH ${PATH}:${ANDROID_HOME}/tools:${ANDROID_HOME}/tools/bin:${ANDROID_HOME}/platform-tools

# Install SDK tools
RUN cd /opt && \
    curl -OL https://dl.google.com/android/repository/sdk-tools-linux-4333796.zip && \
    unzip sdk-tools-linux-4333796.zip -d ${ANDROID_HOME} && \
    rm sdk-tools-linux-4333796.zip

# Accept licenses before installing components; no need to echo y for each one.
# The accepted license covers all the standard components installed from this
# file. Non-standard components (MIPS system images, preview versions, GDK for
# Google Glass, and Android Google TV) require separate licenses and are not
# accepted here.
RUN yes | sdkmanager --licenses

# Install platform tools, SDK platform, and other build tools
RUN yes | sdkmanager \
    "tools" \
    "platform-tools" \
    "platforms;android-27" \
    "platforms;android-23" \
    "build-tools;27.0.3" \
    "build-tools;23.0.3"

# Install Android NDK (r14b)
RUN cd /opt && \
    curl -L -o android-ndk.zip http://dl.google.com/android/repository/android-ndk-r14b-linux-x86_64.zip && \
    unzip -q android-ndk.zip && \
    rm -f android-ndk.zip

# Configure the TensorFlow build to use the tools we just downloaded
RUN cd /tensorflow && \
    printf '\n\nn\ny\nn\nn\nn\ny\nn\nn\nn\nn\nn\nn\n\ny\n%s\n\n\n' ${ANDROID_HOME} | ./configure

WORKDIR /tensorflow
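
Editor's note, not part of the commit: a quick sanity check for an image built from this Dockerfile is to run the Object Detection API's model builder tests, which the installation guide uses as its self-test. A minimal sketch, assuming the `detect-tf` tag used in the README that follows:

```bash
# Build the image (from the directory containing this Dockerfile) and run the
# API's installation self-test inside it. The tag name is our choice, matching
# the README below; PYTHONPATH is already set by the Dockerfile's ENV line.
docker build --tag detect-tf .
docker run --rm detect-tf \
  python /tensorflow/models/research/object_detection/builders/model_builder_test.py
```
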
Lines changed: 69 additions & 0 deletions
@@ -0,0 +1,69 @@
# Dockerfile for the TPU and TensorFlow Lite Object Detection tutorial

This Docker image automates the setup involved with training
object detection models on Google Cloud and building the Android TensorFlow Lite
demo app. We recommend using this container if you decide to work through our
tutorial on ["Training and serving a real-time mobile object detector in
30 minutes with Cloud TPUs"](https://medium.com/tensorflow/training-and-serving-a-realtime-mobile-object-detector-in-30-minutes-with-cloud-tpus-b78971cf1193), though of course it may be useful even if you would
like to use the Object Detection API outside the context of the tutorial.

A couple of words of warning:

1. Docker containers do not have persistent storage. This means that any changes
   you make to files inside the container will not persist if you restart
   the container. When running through the tutorial,
   **do not close the container**.
2. To be able to deploy the [Android app](
   https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/lite/examples/android/app)
   (which you will build at the end of the tutorial),
   you will need to kill any instances of `adb` running on the host machine. You
   can accomplish this by closing all instances of Android Studio, and then
   running `adb kill-server`.

You can install Docker by following the [instructions here](
https://docs.docker.com/install/).

## Running The Container

From this directory, build the Dockerfile as follows (this takes a while):

```
docker build --tag detect-tf .
```

Run the container:

```
docker run --rm -it --privileged -p 6006:6006 detect-tf
```

When running the container, you will find yourself inside the `/tensorflow`
directory, which is the path to the TensorFlow [source
tree](https://github.com/tensorflow/tensorflow).

## Text Editing

The tutorial also
requires you to occasionally edit files inside the source tree.
This Docker image comes with `vim`, `nano`, and `emacs` preinstalled for your
convenience.

## What's In This Container

This container is derived from the nightly build of TensorFlow, and contains the
sources for TensorFlow at `/tensorflow`, as well as the
[TensorFlow Models](https://github.com/tensorflow/models), which are available at
`/tensorflow/models` (and contain the Object Detection API as a subdirectory
at `/tensorflow/models/research/object_detection`).
The Oxford-IIIT Pets dataset, the COCO pre-trained SSD + MobileNet (v1)
checkpoint, and an example
trained model are all available in `/tmp` in their respective folders.

This container also has the `gsutil` and `gcloud` utilities, the `bazel` build
tool, and all dependencies necessary to use the Object Detection API and to
compile and install the TensorFlow Lite Android demo app.

At various points throughout the tutorial, you may see references to the
*research directory*. This refers to the `research` folder within the
models repository, located at
`/tensorflow/models/research`.
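
Editor's note, not part of the commit: the Dockerfile above pre-stages a TensorFlow Lite model that the tutorial later regenerates with `export_tflite_ssd_graph.py`. A hedged sketch of that step follows; the flag names reflect our reading of the API's TFLite documentation (`g3doc/running_on_mobile_tensorflowlite.md`), and the paths are illustrative placeholders.

```bash
# Sketch only: regenerate the TFLite-compatible frozen graph from a trained
# checkpoint. ${PIPELINE_CONFIG_PATH}, ${MODEL_DIR}, and ${CHECKPOINT_NUMBER}
# are placeholders; verify the flag names against the TFLite doc.
cd /tensorflow/models/research
python object_detection/export_tflite_ssd_graph.py \
    --pipeline_config_path=${PIPELINE_CONFIG_PATH} \
    --trained_checkpoint_prefix=${MODEL_DIR}/model.ckpt-${CHECKPOINT_NUMBER} \
    --output_directory=/tmp/tflite \
    --add_postprocessing_op=true
```
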

research/object_detection/g3doc/detection_model_zoo.md

Lines changed: 18 additions & 9 deletions
@@ -1,8 +1,9 @@
 # Tensorflow detection model zoo

 We provide a collection of detection models pre-trained on the [COCO
-dataset](http://mscoco.org), the [Kitti dataset](http://www.cvlibs.net/datasets/kitti/), and the
-[Open Images dataset](https://github.com/openimages/dataset). These models can
+dataset](http://mscoco.org), the [Kitti dataset](http://www.cvlibs.net/datasets/kitti/),
+the [Open Images dataset](https://github.com/openimages/dataset) and the
+[AVA v2.1 dataset](https://research.google.com/ava/). These models can
 be useful for
 out-of-the-box inference if you are interested in categories already in COCO
 (e.g., humans, cars, etc) or in Open Images (e.g.,
@@ -57,19 +58,26 @@ Some remarks on frozen inference graphs:
   a detector (and discarding the part past that point), which negatively impacts
   standard mAP metrics.
 * Our frozen inference graphs are generated using the
-  [v1.5.0](https://github.com/tensorflow/tensorflow/tree/v1.5.0)
+  [v1.8.0](https://github.com/tensorflow/tensorflow/tree/v1.8.0)
   release version of Tensorflow and we do not guarantee that these will work
   with other versions; this being said, each frozen inference graph can be
   regenerated using your current version of Tensorflow by re-running the
   [exporter](https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/exporting_models.md),
-  pointing it at the model directory as well as the config file inside of it.
+  pointing it at the model directory as well as the corresponding config file in
+  [samples/configs](https://github.com/tensorflow/models/tree/master/research/object_detection/samples/configs).


-## COCO-trained models {#coco-models}
+## COCO-trained models

 | Model name | Speed (ms) | COCO mAP[^1] | Outputs |
 | ------------ | :--------------: | :--------------: | :-------------: |
 | [ssd_mobilenet_v1_coco](http://download.tensorflow.org/models/object_detection/ssd_mobilenet_v1_coco_2018_01_28.tar.gz) | 30 | 21 | Boxes |
+| [ssd_mobilenet_v1_0.75_depth_coco ☆](http://download.tensorflow.org/models/object_detection/ssd_mobilenet_v1_0.75_depth_300x300_coco14_sync_2018_07_03.tar.gz) | 26 | 18 | Boxes |
+| [ssd_mobilenet_v1_quantized_coco ☆](http://download.tensorflow.org/models/object_detection/ssd_mobilenet_v1_quantized_300x300_coco14_sync_2018_07_03.tar.gz) | 29 | 18 | Boxes |
+| [ssd_mobilenet_v1_0.75_depth_quantized_coco ☆](http://download.tensorflow.org/models/object_detection/ssd_mobilenet_v1_0.75_depth_quantized_300x300_coco14_sync_2018_07_03.tar.gz) | 29 | 16 | Boxes |
+| [ssd_mobilenet_v1_ppn_coco ☆](http://download.tensorflow.org/models/object_detection/ssd_mobilenet_v1_ppn_shared_box_predictor_300x300_coco14_sync_2018_07_03.tar.gz) | 26 | 20 | Boxes |
+| [ssd_mobilenet_v1_fpn_coco ☆](http://download.tensorflow.org/models/object_detection/ssd_mobilenet_v1_fpn_shared_box_predictor_640x640_coco14_sync_2018_07_03.tar.gz) | 56 | 32 | Boxes |
+| [ssd_resnet_50_fpn_coco ☆](http://download.tensorflow.org/models/object_detection/ssd_resnet50_v1_fpn_shared_box_predictor_640x640_coco14_sync_2018_07_03.tar.gz) | 76 | 35 | Boxes |
 | [ssd_mobilenet_v2_coco](http://download.tensorflow.org/models/object_detection/ssd_mobilenet_v2_coco_2018_03_29.tar.gz) | 31 | 22 | Boxes |
 | [ssdlite_mobilenet_v2_coco](http://download.tensorflow.org/models/object_detection/ssdlite_mobilenet_v2_coco_2018_05_09.tar.gz) | 27 | 22 | Boxes |
 | [ssd_inception_v2_coco](http://download.tensorflow.org/models/object_detection/ssd_inception_v2_coco_2018_01_28.tar.gz) | 42 | 24 | Boxes |
@@ -88,29 +96,30 @@ Some remarks on frozen inference graphs:
 | [mask_rcnn_resnet101_atrous_coco](http://download.tensorflow.org/models/object_detection/mask_rcnn_resnet101_atrous_coco_2018_01_28.tar.gz) | 470 | 33 | Masks |
 | [mask_rcnn_resnet50_atrous_coco](http://download.tensorflow.org/models/object_detection/mask_rcnn_resnet50_atrous_coco_2018_01_28.tar.gz) | 343 | 29 | Masks |

+Note: The star (☆) at the end of a model name indicates that the model supports TPU training.

-
-## Kitti-trained models {#kitti-models}
+## Kitti-trained models

 Model name | Speed (ms) | Pascal mAP@0.5 | Outputs
 ---------- | :--------: | :------------: | :-----:
 [faster_rcnn_resnet101_kitti](http://download.tensorflow.org/models/object_detection/faster_rcnn_resnet101_kitti_2018_01_28.tar.gz) | 79 | 87 | Boxes

-## Open Images-trained models {#open-images-models}
+## Open Images-trained models

 Model name | Speed (ms) | Open Images mAP@0.5[^2] | Outputs
 ---------- | :--------: | :---------------------: | :-----:
 [faster_rcnn_inception_resnet_v2_atrous_oid](http://download.tensorflow.org/models/object_detection/faster_rcnn_inception_resnet_v2_atrous_oid_2018_01_28.tar.gz) | 727 | 37 | Boxes
 [faster_rcnn_inception_resnet_v2_atrous_lowproposals_oid](http://download.tensorflow.org/models/object_detection/faster_rcnn_inception_resnet_v2_atrous_lowproposals_oid_2018_01_28.tar.gz) | 347 | | Boxes


-## AVA v2.1 trained models {#ava-models}
+## AVA v2.1 trained models

 Model name | Speed (ms) | Pascal mAP@0.5 | Outputs
 ---------- | :--------: | :------------: | :-----:
 [faster_rcnn_resnet101_ava_v2.1](http://download.tensorflow.org/models/object_detection/faster_rcnn_resnet101_ava_v2.1_2018_04_30.tar.gz) | 93 | 11 | Boxes


 [^1]: See [MSCOCO evaluation protocol](http://cocodataset.org/#detections-eval).
+
 [^2]: This is PASCAL mAP with a slightly different way of true positives computation: see [Open Images evaluation protocol](evaluation_protocols.md#open-images).
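
Editor's note, not part of the commit: each model link in the tables above is a tarball that unpacks into a directory of checkpoint and graph files. A minimal sketch using one of the newly added TPU-compatible models; the expected contents (checkpoint files, `pipeline.config`, `frozen_inference_graph.pb`) reflect the zoo's usual layout and should be confirmed after extraction.

```bash
# Fetch one of the newly released TPU-compatible models (URL copied from the
# table above) and inspect its contents. The directory name matches the
# tarball; the listed files reflect the zoo's usual layout, not a guarantee.
curl -O "http://download.tensorflow.org/models/object_detection/ssd_mobilenet_v1_ppn_shared_box_predictor_300x300_coco14_sync_2018_07_03.tar.gz"
tar xzf ssd_mobilenet_v1_ppn_shared_box_predictor_300x300_coco14_sync_2018_07_03.tar.gz
ls ssd_mobilenet_v1_ppn_shared_box_predictor_300x300_coco14_sync_2018_07_03/
```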

research/object_detection/g3doc/running_locally.md

Lines changed: 16 additions & 31 deletions
@@ -34,37 +34,22 @@ A local training job can be run with the following command:

 ```bash
 # From the tensorflow/models/research/ directory
-python object_detection/train.py \
-    --logtostderr \
-    --pipeline_config_path=${PATH_TO_YOUR_PIPELINE_CONFIG} \
-    --train_dir=${PATH_TO_TRAIN_DIR}
+PIPELINE_CONFIG_PATH={path to pipeline config file}
+MODEL_DIR={path to model directory}
+NUM_TRAIN_STEPS=50000
+NUM_EVAL_STEPS=2000
+python object_detection/model_main.py \
+    --pipeline_config_path=${PIPELINE_CONFIG_PATH} \
+    --model_dir=${MODEL_DIR} \
+    --num_train_steps=${NUM_TRAIN_STEPS} \
+    --num_eval_steps=${NUM_EVAL_STEPS} \
+    --alsologtostderr
 ```

-where `${PATH_TO_YOUR_PIPELINE_CONFIG}` points to the pipeline config and
-`${PATH_TO_TRAIN_DIR}` points to the directory in which training checkpoints
-and events will be written to. By default, the training job will
-run indefinitely until the user kills it.
-
-## Running the Evaluation Job
-
-Evaluation is run as a separate job. The eval job will periodically poll the
-train directory for new checkpoints and evaluate them on a test dataset. The
-job can be run using the following command:
-
-```bash
-# From the tensorflow/models/research/ directory
-python object_detection/eval.py \
-    --logtostderr \
-    --pipeline_config_path=${PATH_TO_YOUR_PIPELINE_CONFIG} \
-    --checkpoint_dir=${PATH_TO_TRAIN_DIR} \
-    --eval_dir=${PATH_TO_EVAL_DIR}
-```
-
-where `${PATH_TO_YOUR_PIPELINE_CONFIG}` points to the pipeline config,
-`${PATH_TO_TRAIN_DIR}` points to the directory in which training checkpoints
-were saved (same as the training job) and `${PATH_TO_EVAL_DIR}` points to the
-directory in which evaluation events will be saved. As with the training job,
-the eval job run until terminated by default.
+where `${PIPELINE_CONFIG_PATH}` points to the pipeline config and
+`${MODEL_DIR}` points to the directory to which training checkpoints
+and events will be written. Note that this binary will interleave both
+training and evaluation.

 ## Running Tensorboard

@@ -73,9 +58,9 @@ using the recommended directory structure, Tensorboard can be run using the
 following command:

 ```bash
-tensorboard --logdir=${PATH_TO_MODEL_DIRECTORY}
+tensorboard --logdir=${MODEL_DIR}
 ```

-where `${PATH_TO_MODEL_DIRECTORY}` points to the directory that contains the
+where `${MODEL_DIR}` points to the directory that contains the
 train and eval directories. Please note it may take Tensorboard a couple minutes
 to populate with data.
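
Editor's note, not part of the commit: the natural follow-up to local training, and the step the model zoo's remark about regenerating frozen graphs refers to, is running the exporter on a trained checkpoint. A hedged sketch; the flag names follow our reading of `g3doc/exporting_models.md`, and the paths and `${CHECKPOINT_NUMBER}` are illustrative placeholders.

```bash
# Sketch: export a trained checkpoint to a frozen inference graph.
# From the tensorflow/models/research/ directory. Paths and the checkpoint
# number are placeholders; verify the flags against exporting_models.md.
python object_detection/export_inference_graph.py \
    --input_type image_tensor \
    --pipeline_config_path=${PIPELINE_CONFIG_PATH} \
    --trained_checkpoint_prefix=${MODEL_DIR}/model.ckpt-${CHECKPOINT_NUMBER} \
    --output_directory=${EXPORT_DIR}
```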
