|
1 | 1 | {
|
2 | 2 | "cells": [
|
| 3 | + { |
| 4 | + "cell_type": "markdown", |
| 5 | + "metadata": {}, |
| 6 | + "source": [ |
| 7 | + "# Finding visual objects in images - Image Segmentation with tf.keras " |
| 8 | + ] |
| 9 | + }, |
| 10 | + { |
| 11 | + "cell_type": "markdown", |
| 12 | + "metadata": {}, |
| 13 | + "source": [ |
| 14 | + "<table class=\"tfo-notebook-buttons\" align=\"left\"><td>\n", |
| 15 | + "<a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/tensorflow/blob/master/tensorflow/contrib/eager/python/examples/generative_examples/image_captioning_with_attention.ipynb\">\n", |
| 16 | + " <img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" />Run in Google Colab</a> \n", |
| 17 | + "</td><td>\n", |
| 18 | + "<a target=\"_blank\" href=\"https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/eager/python/examples/generative_examples/image_captioning_with_attention.ipynb\"><img width=32px src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\" />View source on GitHub</a></td></table>" |
| 19 | + ] |
| 20 | + }, |
3 | 21 | {
|
4 | 22 | "cell_type": "markdown",
|
5 | 23 | "metadata": {
|
6 | 24 | "colab_type": "text",
|
7 | 25 | "id": "cl79rk4KKol8"
|
8 | 26 | },
|
9 | 27 | "source": [
|
10 |
| - "# Finding visual objects in images - Image Segmentation with tf.keras \n", |
11 | 28 | "In this tutorial we will learn how to segment images. **Segmentation** is the process of generating pixel-wise segmentations giving the class of the object visible at each pixel. For example, we could be identifying the location and boundaries of people within an image or identifying cell nuclei from an image. Formally, image segmentation refers to the process of partitioning an image into a set of pixels that we desire to identify (our target) and the background. \n",
|
12 | 29 | "\n",
|
13 | 30 | "Specifically, in this tutorial we will be using the [Kaggle Carvana Image Masking Challenge Dataset](https://www.kaggle.com/c/carvana-image-masking-challenge). \n",
|
|
93 | 110 | "id": "RW9gk331S0KA"
|
94 | 111 | },
|
95 | 112 | "source": [
|
96 |
| - "# Get all the files " |
| 113 | + "# Get all the files \n", |
| 114 | + "Since this tutorial will be using a dataset from Kaggle, it requires [creating an API Token](https://github.com/Kaggle/kaggle-api) for your Kaggle acccount, and uploading it. " |
| 115 | + ] |
| 116 | + }, |
| 117 | + { |
| 118 | + "cell_type": "code", |
| 119 | + "execution_count": null, |
| 120 | + "metadata": {}, |
| 121 | + "outputs": [], |
| 122 | + "source": [ |
| 123 | + "import os\n", |
| 124 | + "\n", |
| 125 | + "# Upload the API token.\n", |
| 126 | + "def get_kaggle_credentials():\n", |
| 127 | + " token_dir = os.path.join(os.path.expanduser(\"~\"),\".kaggle\")\n", |
| 128 | + " token_file = os.path.join(token_dir, \"kaggle.json\")\n", |
| 129 | + " if not os.path.isdir(token_dir):\n", |
| 130 | + " os.mkdir(token_dir)\n", |
| 131 | + " try:\n", |
| 132 | + " with open(token_file,'r') as f:\n", |
| 133 | + " pass\n", |
| 134 | + " except IOError as no_file:\n", |
| 135 | + " try:\n", |
| 136 | + " from google.colab import files\n", |
| 137 | + " except ImportError:\n", |
| 138 | + " raise no_file\n", |
| 139 | + " \n", |
| 140 | + " uploaded = files.upload()\n", |
| 141 | + " with open(token_file, \"w\") as f:\n", |
| 142 | + " f.write(uploaded[\"kaggle.json\"])\n", |
| 143 | + " os.chmod(token_file, 600)\n", |
| 144 | + "\n", |
| 145 | + "get_kaggle_credentials()\n", |
| 146 | + "# Note: Only import kaggle after adding the credentials.\n", |
| 147 | + "import kaggle" |
97 | 148 | ]
|
98 | 149 | },
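
Once the credentials are in place, the `kaggle` package can pull down the competition files. Below is a minimal sketch of that next step, not part of this diff: `competition_download_file()` is the official kaggle package call, but the competition name and archive file names are assumptions based on the competition's data page.

import kaggle  # assumes get_kaggle_credentials() above has already run

competition_name = 'carvana-image-masking-challenge'

# Download the training images and masks into the current directory.
# The archive names are assumptions taken from the competition's data page.
kaggle.api.competition_download_file(competition_name, 'train.zip', path='.')
kaggle.api.competition_download_file(competition_name, 'train_masks.zip', path='.')

Note that the competition rules must be accepted on the Kaggle website before the API will allow these downloads.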
|
99 | 150 | {
|
|