
Commit 02a30e2

Move Fairness note to second paragraph.
1 parent 18fc407 commit 02a30e2

File tree: 1 file changed, +8 -16 lines


samples/core/tutorials/estimators/wide.ipynb

Lines changed: 8 additions & 16 deletions
@@ -75,14 +75,16 @@
 "cell_type": "markdown",
 "source": [
 "In this tutorial, we will use the `tf.estimator` API in TensorFlow to solve a\n",
-"binary classification problem: Given census data about a person such as age,\n",
-"education, marital status, and occupation (the features), we will try to predict\n",
-"whether or not the person earns more than 50,000 dollars a year (the target\n",
-"label). We will train a **logistic regression** model, and given an individual's\n",
-"information our model will output a number between 0 and 1, which can be\n",
-"interpreted as the probability that the individual has an annual income of over\n",
+"standard benchmark binary classification problem: Given census data about a \n",
+"person such as age, education, marital status, and occupation (the features),\n",
+"we will try to predict whether or not the person earns more than 50,000 dollars\n",
+"a year (the target label). We will train a **logistic regression** model, and given \n",
+"an individual's information our model will output a number between 0 and 1, which\n",
+"can be interpreted as the probability that the individual has an annual income of over\n",
 "50,000 dollars.\n",
 "\n",
+"Key Point: As a modeler and developer, think about how this data is used and the potential benefits and harm a model's predictions can cause. A model like this could reinforce societal biases and disparities. Is each feature relevant to the problem you want to solve or will it introduce bias? For more information, read about [ML fairness](https://developers.google.com/machine-learning/fairness-overview/).\n",
+"\n",
 "## Setup\n",
 "\n",
 "To try the code for this tutorial:\n",
@@ -316,16 +318,6 @@
 "execution_count": 0,
 "outputs": []
 },
-{
-"metadata": {
-"id": "mLUJpWKoeCAE",
-"colab_type": "text"
-},
-"cell_type": "markdown",
-"source": [
-"Key Point: As a modeler and developer, think about how this data is used and the potential benefits and harm a model's predictions can cause. A model like this could reinforce societal biases and disparities. Is a feature relevant to the problem you want to solve or will it introduce bias? For more information, read about [ML fairness](https://developers.google.com/machine-learning/fairness-overview/)."
-]
-},
 {
 "metadata": {
 "id": "QZZtXes4cYvf",
