# Random Forests
<p align="center">
<img src="https://github.com/Avik-Jain/100-Days-Of-ML-Code/blob/master/Info-graphs/Day%2033.jpg">
</p>

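The infographic above summarises the idea: a random forest trains many decision trees on bootstrap samples of the data, considers only a random subset of features at each split, and classifies new points by majority vote. Below is a minimal sketch of that bagging-plus-voting idea, assuming scikit-learn's `DecisionTreeClassifier`; the toy arrays `X_toy`, `y_toy` and the 10-tree count are illustrative assumptions, not part of the original steps.

```python
# Minimal sketch of bagging + majority voting (illustration only, toy data assumed)
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.RandomState(0)
X_toy = rng.rand(100, 2)                               # 100 samples, 2 features (hypothetical)
y_toy = (X_toy[:, 0] + X_toy[:, 1] > 1).astype(int)    # simple synthetic class label

trees = []
for _ in range(10):
    # Bootstrap sample: draw n rows with replacement
    idx = rng.randint(0, len(X_toy), len(X_toy))
    # Each split only considers a random subset of features (max_features='sqrt')
    tree = DecisionTreeClassifier(max_features='sqrt', random_state=0)
    trees.append(tree.fit(X_toy[idx], y_toy[idx]))

# Majority vote across the individual trees for the first five points
votes = np.array([t.predict(X_toy[:5]) for t in trees])
forest_pred = (votes.mean(axis=0) >= 0.5).astype(int)
print(forest_pred)
```
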
### Importing the libraries
```python
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
```

### Importing the dataset
```python
dataset = pd.read_csv('Social_Network_Ads.csv')
# Features: Age and EstimatedSalary (columns 2 and 3); target: purchased / not purchased (column 4)
X = dataset.iloc[:, [2, 3]].values
y = dataset.iloc[:, 4].values
```

### Splitting the dataset into the Training set and Test set
```python
# sklearn.cross_validation has been removed; train_test_split now lives in sklearn.model_selection
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.25, random_state = 0)
```

### Feature Scaling
```python
# Tree ensembles do not require feature scaling, but it keeps the 0.01-step
# visualisation grid below to a manageable size
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
X_train = sc.fit_transform(X_train)
X_test = sc.transform(X_test)
```

### Fitting Random Forest to the Training set
```python
from sklearn.ensemble import RandomForestClassifier
# Ensemble of 10 decision trees, with splits chosen by information gain (entropy)
classifier = RandomForestClassifier(n_estimators = 10, criterion = 'entropy', random_state = 0)
classifier.fit(X_train, y_train)
```

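As a quick sanity check on the fitted ensemble, you can inspect how much each feature contributed to the splits via scikit-learn's standard `feature_importances_` attribute; the column names below are assumed from the dataset step above.

```python
# Impurity-based importance of each input feature (Age, EstimatedSalary assumed)
for name, importance in zip(['Age', 'EstimatedSalary'], classifier.feature_importances_):
    print(f'{name}: {importance:.3f}')
```
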
### Predicting the Test set results
```python
y_pred = classifier.predict(X_test)
```

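The same model can score a single new observation; the age and salary values below are hypothetical, and the point must be transformed with the same `StandardScaler` fitted on the training set.

```python
# Predict for one hypothetical user: 30 years old, estimated salary of 87,000
new_point = sc.transform([[30, 87000]])
print(classifier.predict(new_point))    # array with the predicted class, e.g. [0] or [1]
```
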
### Making the Confusion Matrix
```python
from sklearn.metrics import confusion_matrix
# Rows are the actual classes, columns the predicted classes
cm = confusion_matrix(y_test, y_pred)
```

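If you also want a single summary number, accuracy can be read off the confusion matrix, or computed directly with scikit-learn's `accuracy_score`; this is an optional addition to the original steps.

```python
from sklearn.metrics import accuracy_score

# Fraction of correct predictions: diagonal of the confusion matrix over all test samples
print(cm.trace() / cm.sum())
print(accuracy_score(y_test, y_pred))   # same value, computed directly
```
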
### Visualising the Training set results
```python
from matplotlib.colors import ListedColormap
X_set, y_set = X_train, y_train
# Build a fine grid over the (scaled) feature space and colour it by the predicted class
X1, X2 = np.meshgrid(np.arange(start = X_set[:, 0].min() - 1, stop = X_set[:, 0].max() + 1, step = 0.01),
                     np.arange(start = X_set[:, 1].min() - 1, stop = X_set[:, 1].max() + 1, step = 0.01))
plt.contourf(X1, X2, classifier.predict(np.array([X1.ravel(), X2.ravel()]).T).reshape(X1.shape),
             alpha = 0.75, cmap = ListedColormap(('red', 'green')))
plt.xlim(X1.min(), X1.max())
plt.ylim(X2.min(), X2.max())
# Overlay the training points, coloured by their true class
for i, j in enumerate(np.unique(y_set)):
    plt.scatter(X_set[y_set == j, 0], X_set[y_set == j, 1],
                c = ListedColormap(('red', 'green'))(i), label = j)
plt.title('Random Forest Classification (Training set)')
plt.xlabel('Age')
plt.ylabel('Estimated Salary')
plt.legend()
plt.show()
```

### Visualising the Test set results
```python
from matplotlib.colors import ListedColormap
X_set, y_set = X_test, y_test
X1, X2 = np.meshgrid(np.arange(start = X_set[:, 0].min() - 1, stop = X_set[:, 0].max() + 1, step = 0.01),
                     np.arange(start = X_set[:, 1].min() - 1, stop = X_set[:, 1].max() + 1, step = 0.01))
plt.contourf(X1, X2, classifier.predict(np.array([X1.ravel(), X2.ravel()]).T).reshape(X1.shape),
             alpha = 0.75, cmap = ListedColormap(('red', 'green')))
plt.xlim(X1.min(), X1.max())
plt.ylim(X2.min(), X2.max())
for i, j in enumerate(np.unique(y_set)):
    plt.scatter(X_set[y_set == j, 0], X_set[y_set == j, 1],
                c = ListedColormap(('red', 'green'))(i), label = j)
plt.title('Random Forest Classification (Test set)')
plt.xlabel('Age')
plt.ylabel('Estimated Salary')
plt.legend()
plt.show()
```
