FEA Categorical split support for DecisionTree*, ExtraTree*, RandomForest* and ExtraTrees* #29437


Draft: adam2392 wants to merge 37 commits into main

Conversation

@adam2392 (Member) commented Jul 8, 2024

Reference Issues/PRs

Supersedes #12866, #4899, #7068, and #3346

What does this implement/fix? Explain your changes.

Implements splitting rules for categorical data in the decision tree classifiers and regressors.

  • Moves tree data structures (ParentInfo, SplitRecord, Node) into _utils.pxd to maintain the cimport/import sequence
  • Adds partitioning of samples given a category (this nicely complements the existing partitioning of samples given a numerical threshold)
  • Adds splitting logic given a category
  • Adds a bitset cache (i.e. a uint64_t memory view) that lets us compute the category split as we traverse the tree (see the sketch below)
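
For intuition, here is a minimal plain-Python sketch of the bitset idea (the helper names are illustrative, not the PR's Cython API): the categories routed to the left child are packed into the bits of a single uint64, so the left/right decision during traversal reduces to one shift and one mask.

```python
import numpy as np

def make_left_bitset(left_categories):
    """Pack category indices (0..63) into one uint64 bitset."""
    bitset = np.uint64(0)
    for cat in left_categories:
        bitset |= np.uint64(1) << np.uint64(cat)
    return bitset

def goes_left(bitset, category):
    """True if the sample's category bit is set, i.e. it goes left."""
    return bool((bitset >> np.uint64(category)) & np.uint64(1))

# Categories {0, 3, 5} are routed to the left child, everything else right.
left = make_left_bitset([0, 3, 5])
assert goes_left(left, 3)
assert not goes_left(left, 4)
```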

Open Questions / TODO

  1. Should breiman_shortcut be part of Splitter? It is completely unused by RandomSplitter, so it seems odd to pass in an unnecessary argument. On the other hand, if we specialize breiman_shortcut only for BestSplitter, then we also need to specialize how breiman_shortcut is passed in BaseDecisionTree.
  2. Add unit tests for splitting on categories vs. a one-hot encoder, checking that categorical splits yield shallower trees for 8 categories (see the sketch after this list)
  3. Understand the bitset operations and merge them with the HistGradientBoosting bitset operations
  4. Run benchmark experiments comparing splitting on categories vs. not, in terms of fit time, accuracy, etc.
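
A sketch of the test envisioned in item 2, assuming the `categorical_features` constructor parameter this PR introduces (the exact public API may still change): the class depends only on the category's parity, so one categorical partition separates the classes perfectly, while the one-hot tree must stack several indicator splits.

```python
import numpy as np
from sklearn.preprocessing import OneHotEncoder
from sklearn.tree import DecisionTreeClassifier

rng = np.random.RandomState(0)

# One feature with 8 categories; the class is the category's parity, so
# the subset {0, 2, 4, 6} vs. {1, 3, 5, 7} is a perfect single split.
X = rng.randint(0, 8, size=(200, 1)).astype(np.float64)
y = (X[:, 0] % 2 == 0).astype(int)

# `categorical_features` is the constructor parameter added by this PR.
tree_cat = DecisionTreeClassifier(random_state=0, categorical_features=[0])
tree_cat.fit(X, y)

# With one-hot encoding, no single indicator separates the classes, so
# the tree has to stack several axis-aligned splits.
X_ohe = OneHotEncoder(sparse_output=False).fit_transform(X)
tree_ohe = DecisionTreeClassifier(random_state=0).fit(X_ohe, y)

assert tree_cat.get_depth() < tree_ohe.get_depth()
```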

Any other comments?

May be related to #24967

It would be nice to merge in #29458.

Notes:

  1. breiman_shortcut should live either only in BestSplitter, as a special method of the Partitioner, or as an optionally used private function. Since it is a private method used only during splitting, it arguably does not belong in the Partitioner. (See the sketch after this list for what the shortcut computes.)
  2. I think we can enable support for sparse categories in the next PR.
  3. Perhaps information about categories should be passed down from the parent node: the Tree technically has information about n_categories, so we can trim down the number of categories considered at each node. This can be implemented last and then benchmarked to determine whether the optimization is useful. I suspect it is, since categorical splits currently spend time "counting the unique categories", which seems unnecessary. This is conceptually similar to tracking constant features.
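
For reference, a minimal NumPy sketch of what the Breiman shortcut computes (function and variable names are illustrative, not the PR's Cython API): for binary classification or regression, sorting categories by their mean target value reduces the subset search from 2^(k-1) - 1 candidates to the k - 1 prefix splits of the sorted order, without losing the optimum.

```python
import numpy as np

def breiman_candidate_splits(x_cat, y):
    """Sketch of the Breiman shortcut: rather than scanning all
    2**(k-1) - 1 subsets of k categories, sort the categories by the
    mean of y within each category and only evaluate the k - 1 prefix
    splits of that ordering."""
    cats = np.unique(x_cat)
    means = np.array([y[x_cat == c].mean() for c in cats])
    sorted_cats = cats[np.argsort(means, kind="stable")]
    # Each prefix of the sorted categories is one candidate left partition.
    return [set(sorted_cats[: i + 1]) for i in range(len(sorted_cats) - 1)]

x_cat = np.array([0, 0, 1, 1, 2, 2, 3, 3])
y = np.array([1, 1, 0, 0, 1, 0, 0, 0])
# Means per category: {0: 1.0, 1: 0.0, 2: 0.5, 3: 0.0}, so the sorted
# order is [1, 3, 2, 0] and the candidates are {1}, {1, 3}, {1, 3, 2}.
print(breiman_candidate_splits(x_cat, y))
```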

Signed-off-by: Adam Li <[email protected]>

github-actions bot commented Jul 8, 2024

❌ Linting issues

This PR is introducing linting issues. Here's a summary of the issues. Note that you can avoid having linting issues by enabling pre-commit hooks. Instructions to enable them can be found here.

You can see the details of the linting issues under the lint job here


ruff format

ruff detected issues. Please run ruff format locally and push the changes. Here you can see the detected issues. Note that the installed ruff version is ruff=0.11.7.


--- sklearn/ensemble/tests/test_forest.py
+++ sklearn/ensemble/tests/test_forest.py
@@ -168,11 +168,12 @@
     reg = ForestRegressor(n_estimators=5, criterion=criterion, random_state=1)
     reg.fit(X_reg, y_reg)
     score = reg.score(X_reg, y_reg)
-    assert (
-        score > 0.93
-    ), "Failed with max_features=None, criterion %s and score = %f" % (
-        criterion,
-        score,
+    assert score > 0.93, (
+        "Failed with max_features=None, criterion %s and score = %f"
+        % (
+            criterion,
+            score,
+        )
     )
 
     reg = ForestRegressor(
@@ -1068,10 +1069,10 @@
         node_weights = np.bincount(out, weights=weights)
         # drop inner nodes
         leaf_weights = node_weights[node_weights != 0]
-        assert (
-            np.min(leaf_weights) >= total_weight * est.min_weight_fraction_leaf
-        ), "Failed with {0} min_weight_fraction_leaf={1}".format(
-            name, est.min_weight_fraction_leaf
+        assert np.min(leaf_weights) >= total_weight * est.min_weight_fraction_leaf, (
+            "Failed with {0} min_weight_fraction_leaf={1}".format(
+                name, est.min_weight_fraction_leaf
+            )
         )
 
 

--- sklearn/tree/_classes.py
+++ sklearn/tree/_classes.py
@@ -278,7 +278,7 @@
                     )
             if is_sparse_X and self.categorical_features is not None:
                 raise NotImplementedError(
-                    "Categorical features not supported" " with sparse inputs"
+                    "Categorical features not supported with sparse inputs"
                 )
 
             if self.criterion == "poisson":
@@ -643,7 +643,7 @@
                 raise ValueError("No support for np.int64 index based sparse matrices")
             if is_sparse_X and np.any(self.n_categories_ > 0):
                 raise NotImplementedError(
-                    "Categorical features not supported" " with sparse inputs"
+                    "Categorical features not supported with sparse inputs"
                 )
         else:
             # The number of features is checked regardless of `check_input`

--- sklearn/tree/tests/test_cython.py
+++ sklearn/tree/tests/test_cython.py
@@ -62,9 +62,9 @@
     expected_sorted = np.argsort(means)
 
     # Check that sorted_cat matches expected_sorted
-    assert np.all(
-        sorted_cat == expected_sorted
-    ), f"Expected {expected_sorted}, got {sorted_cat}"
+    assert np.all(sorted_cat == expected_sorted), (
+        f"Expected {expected_sorted}, got {sorted_cat}"
+    )
     print("Test passed! Sorted categories:", sorted_cat)
 
 
@@ -105,9 +105,9 @@
     expected_sorted_global = present_cats[expected_sorted_local]
 
     # Check that sorted_cat matches expected_sorted_global
-    assert np.all(
-        sorted_cat == expected_sorted_global
-    ), f"Expected {expected_sorted_global}, got {sorted_cat}"
+    assert np.all(sorted_cat == expected_sorted_global), (
+        f"Expected {expected_sorted_global}, got {sorted_cat}"
+    )
     print("Test passed! Sorted categories (global indices):", sorted_cat)
 
 
@@ -136,12 +136,12 @@
     tree_ohe.fit(X_ohe, y)
 
     # The categorical split should yield a shallower tree
-    assert (
-        tree_cat.get_depth() == 0
-    ), f"Categorical split depth should be 1, got {tree_cat.get_depth()}"
-    assert (
-        tree_ohe.get_depth() >= 3
-    ), f"One-hot tree depth should be at least 3, got {tree_ohe.get_depth()}"
+    assert tree_cat.get_depth() == 0, (
+        f"Categorical split depth should be 1, got {tree_cat.get_depth()}"
+    )
+    assert tree_ohe.get_depth() >= 3, (
+        f"One-hot tree depth should be at least 3, got {tree_ohe.get_depth()}"
+    )
 
 
 def test_weighted_classification_toy():

--- sklearn/tree/tests/test_monotonic_tree.py
+++ sklearn/tree/tests/test_monotonic_tree.py
@@ -80,9 +80,9 @@
     est.fit(X_train, y_train)
     proba_test = est.predict_proba(X_test)
 
-    assert np.logical_and(
-        proba_test >= 0.0, proba_test <= 1.0
-    ).all(), "Probability should always be in [0, 1] range."
+    assert np.logical_and(proba_test >= 0.0, proba_test <= 1.0).all(), (
+        "Probability should always be in [0, 1] range."
+    )
     assert_allclose(proba_test.sum(axis=1), 1.0)
 
     # Monotonic increase constraint, it applies to the positive class

--- sklearn/tree/tests/test_tree.py
+++ sklearn/tree/tests/test_tree.py
@@ -210,7 +210,6 @@
     return_tuple: bool,
     random_state: int,
 ):
-
     from sklearn.preprocessing import OneHotEncoder
 
     np.random.seed(random_state)
@@ -243,10 +242,10 @@
 
 
 def assert_tree_equal(d, s, message):
-    assert (
-        s.node_count == d.node_count
-    ), "{0}: inequal number of node ({1} != {2})".format(
-        message, s.node_count, d.node_count
+    assert s.node_count == d.node_count, (
+        "{0}: inequal number of node ({1} != {2})".format(
+            message, s.node_count, d.node_count
+        )
     )
 
     assert_array_equal(
@@ -375,9 +374,9 @@
     reg = Tree(criterion=criterion, random_state=0)
     reg.fit(diabetes.data, diabetes.target)
     score = mean_squared_error(diabetes.target, reg.predict(diabetes.data))
-    assert score == pytest.approx(
-        0
-    ), f"Failed with {name}, criterion = {criterion} and score = {score}"
+    assert score == pytest.approx(0), (
+        f"Failed with {name}, criterion = {criterion} and score = {score}"
+    )
 
 
 @skip_if_32bit
@@ -742,10 +741,10 @@
         node_weights = np.bincount(out, weights=weights)
         # drop inner nodes
         leaf_weights = node_weights[node_weights != 0]
-        assert (
-            np.min(leaf_weights) >= total_weight * est.min_weight_fraction_leaf
-        ), "Failed with {0} min_weight_fraction_leaf={1}".format(
-            name, est.min_weight_fraction_leaf
+        assert np.min(leaf_weights) >= total_weight * est.min_weight_fraction_leaf, (
+            "Failed with {0} min_weight_fraction_leaf={1}".format(
+                name, est.min_weight_fraction_leaf
+            )
         )
 
     # test case with no weights passed in
@@ -765,10 +764,10 @@
         node_weights = np.bincount(out)
         # drop inner nodes
         leaf_weights = node_weights[node_weights != 0]
-        assert (
-            np.min(leaf_weights) >= total_weight * est.min_weight_fraction_leaf
-        ), "Failed with {0} min_weight_fraction_leaf={1}".format(
-            name, est.min_weight_fraction_leaf
+        assert np.min(leaf_weights) >= total_weight * est.min_weight_fraction_leaf, (
+            "Failed with {0} min_weight_fraction_leaf={1}".format(
+                name, est.min_weight_fraction_leaf
+            )
         )
 
 
@@ -890,10 +889,10 @@
             (est3, 0.0001),
             (est4, 0.1),
         ):
-            assert (
-                est.min_impurity_decrease <= expected_decrease
-            ), "Failed, min_impurity_decrease = {0} > {1}".format(
-                est.min_impurity_decrease, expected_decrease
+            assert est.min_impurity_decrease <= expected_decrease, (
+                "Failed, min_impurity_decrease = {0} > {1}".format(
+                    est.min_impurity_decrease, expected_decrease
+                )
             )
             est.fit(X, y)
             for node in range(est.tree_.node_count):
@@ -924,10 +923,10 @@
                         imp_parent - wtd_avg_left_right_imp
                     )
 
-                    assert (
-                        actual_decrease >= expected_decrease
-                    ), "Failed with {0} expected min_impurity_decrease={1}".format(
-                        actual_decrease, expected_decrease
+                    assert actual_decrease >= expected_decrease, (
+                        "Failed with {0} expected min_impurity_decrease={1}".format(
+                            actual_decrease, expected_decrease
+                        )
                     )
 
 
@@ -968,9 +967,9 @@
         assert type(est2) == est.__class__
 
         score2 = est2.score(X, y)
-        assert (
-            score == score2
-        ), "Failed to generate same score  after pickling with {0}".format(name)
+        assert score == score2, (
+            "Failed to generate same score  after pickling with {0}".format(name)
+        )
         for attribute in fitted_attribute:
             assert_array_equal(
                 getattr(est2.tree_, attribute),
@@ -2663,9 +2662,9 @@
     # Check that the tree can learn the predictive feature
     # over an average of cross-validation fits.
     tree_cv_score = cross_val_score(tree, X, y, cv=5).mean()
-    assert (
-        tree_cv_score >= expected_score
-    ), f"Expected CV score: {expected_score} but got {tree_cv_score}"
+    assert tree_cv_score >= expected_score, (
+        f"Expected CV score: {expected_score} but got {tree_cv_score}"
+    )
 
 
 @pytest.mark.parametrize(
@@ -3015,9 +3014,9 @@
     tree_ohe.fit(X_ohe, y)
 
     # The categorical split should yield a shallower tree
-    assert (
-        tree_cat.get_depth() == 0
-    ), f"Categorical split depth should be 0, got {tree_cat.get_depth()}"
-    assert (
-        tree_ohe.get_depth() >= 3
-    ), f"One-hot tree depth should be at least 3, got {tree_ohe.get_depth()}"
+    assert tree_cat.get_depth() == 0, (
+        f"Categorical split depth should be 0, got {tree_cat.get_depth()}"
+    )
+    assert tree_ohe.get_depth() >= 3, (
+        f"One-hot tree depth should be at least 3, got {tree_ohe.get_depth()}"
+    )

5 files would be reformatted, 919 files already formatted

cython-lint

cython-lint detected issues. Please fix them locally and push the changes. Here you can see the detected issues. Note that the installed cython-lint version is cython-lint=0.16.6.


/home/runner/work/scikit-learn/scikit-learn/sklearn/utils/_bitset.pyx:1:43: 'uint64_t' imported but unused
/home/runner/work/scikit-learn/scikit-learn/sklearn/utils/_bitset.pyx:54:41: E128 continuation line under-indented for visual indent
/home/runner/work/scikit-learn/scikit-learn/sklearn/utils/_bitset.pyx:111:21: E221 multiple spaces before operator
/home/runner/work/scikit-learn/scikit-learn/sklearn/utils/_bitset.pxd:19:1: W293 blank line contains whitespace
/home/runner/work/scikit-learn/scikit-learn/sklearn/tree/_utils.pyx:520:17: 'idx' defined but unused (try prefixing with underscore?)
/home/runner/work/scikit-learn/scikit-learn/sklearn/tree/_splitter.pyx:348:26: W291 trailing whitespace
/home/runner/work/scikit-learn/scikit-learn/sklearn/tree/_splitter.pyx:356:1: W293 blank line contains whitespace
/home/runner/work/scikit-learn/scikit-learn/sklearn/tree/_splitter.pyx:469:1: W293 blank line contains whitespace
/home/runner/work/scikit-learn/scikit-learn/sklearn/tree/_splitter.pyx:529:108: W291 trailing whitespace
/home/runner/work/scikit-learn/scikit-learn/sklearn/tree/_splitter.pyx:531:26: W291 trailing whitespace
/home/runner/work/scikit-learn/scikit-learn/sklearn/tree/_splitter.pyx:744:19: 'split_seed' defined but unused (try prefixing with underscore?)
/home/runner/work/scikit-learn/scikit-learn/sklearn/tree/_splitter.pyx:850:1: W293 blank line contains whitespace
/home/runner/work/scikit-learn/scikit-learn/sklearn/tree/_partitioner.pxd:186:1: W293 blank line contains whitespace
/home/runner/work/scikit-learn/scikit-learn/sklearn/tree/_partitioner.pxd:273:57: W292 no newline at end of file
/home/runner/work/scikit-learn/scikit-learn/sklearn/tree/_tree.pyx:24:22: 'int32t_ptr_to_ndarray' imported but unused
/home/runner/work/scikit-learn/scikit-learn/sklearn/tree/_tree.pyx:91:45: E262 inline comment should start with '# '
/home/runner/work/scikit-learn/scikit-learn/sklearn/tree/_tree.pyx:92:45: E262 inline comment should start with '# '
/home/runner/work/scikit-learn/scikit-learn/sklearn/tree/_tree.pyx:93:45: E262 inline comment should start with '# '
/home/runner/work/scikit-learn/scikit-learn/sklearn/tree/_tree.pyx:95:48: E262 inline comment should start with '# '
/home/runner/work/scikit-learn/scikit-learn/sklearn/tree/_tree.pyx:96:48: E262 inline comment should start with '# '
/home/runner/work/scikit-learn/scikit-learn/sklearn/tree/_tree.pyx:97:48: E262 inline comment should start with '# '
/home/runner/work/scikit-learn/scikit-learn/sklearn/tree/_tree.pyx:98:49: E262 inline comment should start with '# '
/home/runner/work/scikit-learn/scikit-learn/sklearn/tree/_tree.pyx:971:47: W291 trailing whitespace
/home/runner/work/scikit-learn/scikit-learn/sklearn/tree/_tree.pyx:972:25: E221 multiple spaces before operator
/home/runner/work/scikit-learn/scikit-learn/sklearn/tree/_tree.pyx:974:48: W291 trailing whitespace
/home/runner/work/scikit-learn/scikit-learn/sklearn/tree/_tree.pyx:975:26: E221 multiple spaces before operator
/home/runner/work/scikit-learn/scikit-learn/sklearn/tree/_tree.pyx:2084:20: 'split_value' defined but unused (try prefixing with underscore?)
/home/runner/work/scikit-learn/scikit-learn/sklearn/tree/_utils.pxd:42:40: E114 indentation is not a multiple of 4 (comment)
/home/runner/work/scikit-learn/scikit-learn/sklearn/tree/_utils.pxd:42:40: E116 unexpected indentation (comment)
/home/runner/work/scikit-learn/scikit-learn/sklearn/tree/_partitioner.pyx:74:1: W293 blank line contains whitespace
/home/runner/work/scikit-learn/scikit-learn/sklearn/tree/_partitioner.pyx:450:26: 'feature_values' defined but unused (try prefixing with underscore?)
/home/runner/work/scikit-learn/scikit-learn/sklearn/tree/_partitioner.pyx:528:31: E128 continuation line under-indented for visual indent
/home/runner/work/scikit-learn/scikit-learn/sklearn/tree/_partitioner.pyx:529:31: E128 continuation line under-indented for visual indent
/home/runner/work/scikit-learn/scikit-learn/sklearn/tree/_partitioner.pyx:530:31: E128 continuation line under-indented for visual indent
/home/runner/work/scikit-learn/scikit-learn/sklearn/tree/_partitioner.pyx:532:35: E128 continuation line under-indented for visual indent
/home/runner/work/scikit-learn/scikit-learn/sklearn/tree/_partitioner.pyx:534:1: E302 expected 2 blank lines, found 1
/home/runner/work/scikit-learn/scikit-learn/sklearn/tree/_partitioner.pyx:1153:1: pointless string statement
/home/runner/work/scikit-learn/scikit-learn/sklearn/tree/_partitioner.pyx:1168:35: E128 continuation line under-indented for visual indent
/home/runner/work/scikit-learn/scikit-learn/sklearn/tree/_partitioner.pyx:1172:40: W292 no newline at end of file

Generated for commit: c5f3127. Link to the linter CI: here

Signed-off-by: Adam Li <[email protected]>
adam2392 added 2 commits July 17, 2024 08:53
Signed-off-by: Adam Li <[email protected]>
Signed-off-by: Adam Li <[email protected]>
@adam2392 adam2392 mentioned this pull request Aug 22, 2024