75 | 75 | "cell_type": "markdown",
76 | 76 | "source": [
77 | 77 | "In this tutorial, we will use the `tf.estimator` API in TensorFlow to solve a\n",
78 |    | - "binary classification problem: Given census data about a person such as age,\n",
79 |    | - "education, marital status, and occupation (the features), we will try to predict\n",
80 |    | - "whether or not the person earns more than 50,000 dollars a year (the target\n",
81 |    | - "label). We will train a **logistic regression** model, and given an individual's\n",
82 |    | - "information our model will output a number between 0 and 1, which can be\n",
83 |    | - "interpreted as the probability that the individual has an annual income of over\n",
   | 78 | + "standard benchmark binary classification problem: Given census data about a \n",
   | 79 | + "person such as age, education, marital status, and occupation (the features),\n",
   | 80 | + "we will try to predict whether or not the person earns more than 50,000 dollars\n",
   | 81 | + "a year (the target label). We will train a **logistic regression** model, and given \n",
   | 82 | + "an individual's information our model will output a number between 0 and 1, which\n",
   | 83 | + "can be interpreted as the probability that the individual has an annual income of over\n",
84 | 84 | "50,000 dollars.\n",
85 | 85 | "\n",
   | 86 | + "Key Point: As a modeler and developer, think about how this data is used and the potential benefits and harm a model's predictions can cause. A model like this could reinforce societal biases and disparities. Is each feature relevant to the problem you want to solve or will it introduce bias? For more information, read about [ML fairness](https://developers.google.com/machine-learning/fairness-overview/).\n",
   | 87 | + "\n",
86 | 88 | "## Setup\n",
87 | 89 | "\n",
88 | 90 | "To try the code for this tutorial:\n",
|
316 | 318 | "execution_count": 0,
317 | 319 | "outputs": []
318 | 320 | },
319 |     | - {
320 |     | - "metadata": {
321 |     | - "id": "mLUJpWKoeCAE",
322 |     | - "colab_type": "text"
323 |     | - },
324 |     | - "cell_type": "markdown",
325 |     | - "source": [
326 |     | - "Key Point: As a modeler and developer, think about how this data is used and the potential benefits and harm a model's predictions can cause. A model like this could reinforce societal biases and disparities. Is a feature relevant to the problem you want to solve or will it introduce bias? For more information, read about [ML fairness](https://developers.google.com/machine-learning/fairness-overview/)."
327 |     | - ]
328 |     | - },
329 | 321 | {
330 | 322 | "metadata": {
331 | 323 | "id": "QZZtXes4cYvf",
|