
Commit 79e9cd0

add eager API notebooks
1 parent 4c8c201 commit 79e9cd0

8 files changed: +1005 −9 lines


examples/2_BasicModels/linear_regression_eager_api.py

Lines changed: 3 additions & 4 deletions
@@ -1,7 +1,6 @@
-'''
-A logistic regression learning algorithm example using TensorFlow library.
-This example is using the MNIST database of handwritten digits
-(http://yann.lecun.com/exdb/mnist/)
+''' Linear Regression with Eager API.
+
+A linear regression learning algorithm example using TensorFlow's Eager API.
 
 Author: Aymeric Damien
 Project: https://github.com/aymericdamien/TensorFlow-Examples/
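
Only the docstring changes above are part of the hunk; the training code itself is not shown. For orientation, a minimal sketch of the eager-style pattern such a script follows, assuming TensorFlow 1.x with tensorflow.contrib.eager (the toy data, hyperparameters, and helper names below are illustrative, not the file's actual contents):

# Illustrative sketch only: eager-mode linear regression with implicit gradients.
from __future__ import absolute_import, division, print_function

import numpy as np
import tensorflow as tf
import tensorflow.contrib.eager as tfe

tfe.enable_eager_execution()

# Toy training data (illustrative values)
train_X = [1.0, 2.0, 3.0, 4.0]
train_Y = [2.1, 3.9, 6.2, 8.1]

# Trainable parameters
W = tfe.Variable(np.random.randn(), name="weight")
b = tfe.Variable(np.random.randn(), name="bias")

def linear_regression(inputs):
    # y_hat = W * x + b, evaluated immediately (no tf.Session needed)
    return inputs * W + b

def mean_square_fn(model_fn, inputs, labels):
    return tf.reduce_sum(tf.pow(model_fn(inputs) - labels, 2)) / (2 * len(train_X))

optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.01)
# implicit_gradients builds a function returning d(loss)/d(trainable variables)
grad = tfe.implicit_gradients(mean_square_fn)

for step in range(1000):
    optimizer.apply_gradients(grad(linear_regression, train_X, train_Y))

print("Trained: W = %f, b = %f" % (W.numpy(), b.numpy()))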

examples/2_BasicModels/logistic_regression_eager_api.py

Lines changed: 3 additions & 2 deletions
@@ -1,5 +1,6 @@
-'''
-A logistic regression learning algorithm example using TensorFlow library.
+''' Logistic Regression with Eager API.
+
+A logistic regression learning algorithm example using TensorFlow's Eager API.
 This example is using the MNIST database of handwritten digits
 (http://yann.lecun.com/exdb/mnist/)
 
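
Again, only the docstring is touched here. A hedged sketch of an eager-mode logistic (softmax) regression training step on MNIST, assuming TensorFlow 1.x, tensorflow.contrib.eager, and the standard input_data loader (variable names and hyperparameters are illustrative, not the file's exact contents):

# Illustrative sketch only: eager-mode logistic regression on MNIST.
import tensorflow as tf
import tensorflow.contrib.eager as tfe
from tensorflow.examples.tutorials.mnist import input_data

tfe.enable_eager_execution()

# Images arrive flattened to 784 floats; labels are integer class ids (one_hot=False)
mnist = input_data.read_data_sets("/tmp/data/", one_hot=False)

W = tfe.Variable(tf.zeros([784, 10]), name="weights")
b = tfe.Variable(tf.zeros([10]), name="bias")

def logistic_regression(inputs):
    # Raw logits; softmax is applied inside the loss for numerical stability
    return tf.matmul(inputs, W) + b

def loss_fn(inference_fn, inputs, labels):
    return tf.reduce_mean(
        tf.nn.sparse_softmax_cross_entropy_with_logits(
            logits=inference_fn(inputs), labels=tf.cast(labels, tf.int64)))

optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.1)
grad = tfe.implicit_gradients(loss_fn)

for step in range(1000):
    x_batch, y_batch = mnist.train.next_batch(128)
    optimizer.apply_gradients(grad(logistic_regression, x_batch, y_batch))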

examples/3_NeuralNetworks/neural_network_eager_api.py

Lines changed: 2 additions & 2 deletions
@@ -1,7 +1,7 @@
-""" Neural Network.
+""" Neural Network with Eager API.
 
 A 2-Hidden Layers Fully Connected Neural Network (a.k.a Multilayer Perceptron)
-implementation with TensorFlow. This example is using the MNIST database
+implementation with TensorFlow's Eager API. This example is using the MNIST database
 of handwritten digits (http://yann.lecun.com/exdb/mnist/).
 
 This example is using TensorFlow layers, see 'neural_network_raw' example for
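
The model definition itself is outside this hunk. A sketch of how a 2-hidden-layer MLP built from TensorFlow layers is typically expressed with the contrib eager API of this era (the class name, layer sizes, and the use of tfe.Network with track_layer are assumptions, not the file's exact code):

# Illustrative sketch only: a 2-hidden-layer MLP using TensorFlow layers in eager mode.
import tensorflow as tf
import tensorflow.contrib.eager as tfe

tfe.enable_eager_execution()

n_hidden_1 = 256   # neurons in first hidden layer (assumed)
n_hidden_2 = 256   # neurons in second hidden layer (assumed)
num_classes = 10   # MNIST digit classes

class NeuralNet(tfe.Network):
    """2-hidden-layer fully connected network built from tf.layers objects."""

    def __init__(self):
        super(NeuralNet, self).__init__()
        # track_layer registers each layer's variables with the network
        self.layer_1 = self.track_layer(
            tf.layers.Dense(n_hidden_1, activation=tf.nn.relu))
        self.layer_2 = self.track_layer(
            tf.layers.Dense(n_hidden_2, activation=tf.nn.relu))
        self.out_layer = self.track_layer(tf.layers.Dense(num_classes))

    def call(self, x):
        x = self.layer_1(x)
        x = self.layer_2(x)
        return self.out_layer(x)   # raw class logits

neural_net = NeuralNet()
# Training then follows the same implicit-gradients pattern as the sketches above,
# with a sparse softmax cross-entropy loss over neural_net(x_batch).
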
Lines changed: 238 additions & 0 deletions
@@ -0,0 +1,238 @@
{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Basic introduction to TensorFlow's Eager API\n",
    "\n",
    "A simple introduction to get started with TensorFlow's Eager API.\n",
    "\n",
    "- Author: Aymeric Damien\n",
    "- Project: https://github.com/aymericdamien/TensorFlow-Examples/"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### What is TensorFlow's Eager API ?\n",
    "\n",
    "*Eager execution is an imperative, define-by-run interface where operations are\n",
    "executed immediately as they are called from Python. This makes it easier to\n",
    "get started with TensorFlow, and can make research and development more\n",
    "intuitive. A vast majority of the TensorFlow API remains the same whether eager\n",
    "execution is enabled or not. As a result, the exact same code that constructs\n",
    "TensorFlow graphs (e.g. using the layers API) can be executed imperatively\n",
    "by using eager execution. Conversely, most models written with Eager enabled\n",
    "can be converted to a graph that can be further optimized and/or extracted\n",
    "for deployment in production without changing code. - Rajat Monga*\n",
    "\n",
    "More info: https://research.googleblog.com/2017/10/eager-execution-imperative-define-by.html"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "from __future__ import absolute_import, division, print_function\n",
    "\n",
    "import numpy as np\n",
    "import tensorflow as tf\n",
    "import tensorflow.contrib.eager as tfe"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {
    "collapsed": false
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Setting Eager mode...\n"
     ]
    }
   ],
   "source": [
    "# Set Eager API\n",
    "print(\"Setting Eager mode...\")\n",
    "tfe.enable_eager_execution()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {
    "collapsed": false
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Define constant tensors\n",
      "a = 2\n",
      "b = 3\n"
     ]
    }
   ],
   "source": [
    "# Define constant tensors\n",
    "print(\"Define constant tensors\")\n",
    "a = tf.constant(2)\n",
    "print(\"a = %i\" % a)\n",
    "b = tf.constant(3)\n",
    "print(\"b = %i\" % b)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {
    "collapsed": false
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Running operations, without tf.Session\n",
      "a + b = 5\n",
      "a * b = 6\n"
     ]
    }
   ],
   "source": [
    "# Run the operation without the need for tf.Session\n",
    "print(\"Running operations, without tf.Session\")\n",
    "c = a + b\n",
    "print(\"a + b = %i\" % c)\n",
    "d = a * b\n",
    "print(\"a * b = %i\" % d)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {
    "collapsed": false
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Mixing operations with Tensors and Numpy Arrays\n",
      "Tensor:\n",
      " a = tf.Tensor(\n",
      "[[2. 1.]\n",
      " [1. 0.]], shape=(2, 2), dtype=float32)\n",
      "NumpyArray:\n",
      " b = [[3. 0.]\n",
      " [5. 1.]]\n"
     ]
    }
   ],
   "source": [
    "# Full compatibility with Numpy\n",
    "print(\"Mixing operations with Tensors and Numpy Arrays\")\n",
    "\n",
    "# Define constant tensors\n",
    "a = tf.constant([[2., 1.],\n",
    "                 [1., 0.]], dtype=tf.float32)\n",
    "print(\"Tensor:\\n a = %s\" % a)\n",
    "b = np.array([[3., 0.],\n",
    "              [5., 1.]], dtype=np.float32)\n",
    "print(\"NumpyArray:\\n b = %s\" % b)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {
    "collapsed": false
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Running operations, without tf.Session\n",
      "a + b = tf.Tensor(\n",
      "[[5. 1.]\n",
      " [6. 1.]], shape=(2, 2), dtype=float32)\n",
      "a * b = tf.Tensor(\n",
      "[[11. 1.]\n",
      " [ 3. 0.]], shape=(2, 2), dtype=float32)\n"
     ]
    }
   ],
   "source": [
    "# Run the operation without the need for tf.Session\n",
    "print(\"Running operations, without tf.Session\")\n",
    "\n",
    "c = a + b\n",
    "print(\"a + b = %s\" % c)\n",
    "\n",
    "d = tf.matmul(a, b)\n",
    "print(\"a * b = %s\" % d)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "metadata": {
    "collapsed": false
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Iterate through Tensor 'a':\n",
      "tf.Tensor(2.0, shape=(), dtype=float32)\n",
      "tf.Tensor(1.0, shape=(), dtype=float32)\n",
      "tf.Tensor(1.0, shape=(), dtype=float32)\n",
      "tf.Tensor(0.0, shape=(), dtype=float32)\n"
     ]
    }
   ],
   "source": [
    "print(\"Iterate through Tensor 'a':\")\n",
    "for i in range(a.shape[0]):\n",
    "    for j in range(a.shape[1]):\n",
    "        print(a[i][j])"
   ]
  }
 ],
 "metadata": {
  "anaconda-cloud": {},
  "kernelspec": {
   "display_name": "Python [default]",
   "language": "python",
   "name": "python2"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 2
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython2",
   "version": "2.7.12"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 1
}

notebooks/2_BasicModels/linear_regression_eager_api.ipynb

Lines changed: 181 additions & 0 deletions
Large diffs are not rendered by default.

notebooks/2_BasicModels/logistic_regression.ipynb

Lines changed: 13 additions & 1 deletion
@@ -9,12 +9,24 @@
     "# Logistic Regression Example\n",
     "\n",
     "A logistic regression learning algorithm example using TensorFlow library.\n",
-    "This example is using the MNIST database of handwritten digits (http://yann.lecun.com/exdb/mnist/)\n",
     "\n",
     "- Author: Aymeric Damien\n",
     "- Project: https://github.com/aymericdamien/TensorFlow-Examples/"
    ]
   },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "## MNIST Dataset Overview\n",
+    "\n",
+    "This example is using MNIST handwritten digits. The dataset contains 60,000 examples for training and 10,000 examples for testing. The digits have been size-normalized and centered in a fixed-size image (28x28 pixels) with values from 0 to 1. For simplicity, each image has been flattened and converted to a 1-D numpy array of 784 features (28*28).\n",
+    "\n",
+    "![MNIST Dataset](http://neuralnetworksanddeeplearning.com/images/mnist_100_digits.png)\n",
+    "\n",
+    "More info: http://yann.lecun.com/exdb/mnist/"
+   ]
+  },
   {
    "cell_type": "code",
    "execution_count": 1,
