FIX delete feature_names_in_ when refitting on a ndarray #21389
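To make the intent of the fix concrete, here is a small illustration of the behaviour the PR title describes. This is a sketch, not code from the PR: LatentDirichletAllocation is used only because it is one of the estimators touched below, and the column names are made up. After the fix, refitting on a plain ndarray drops the feature_names_in_ remembered from an earlier DataFrame fit instead of leaving it stale.

    import numpy as np
    import pandas as pd
    from sklearn.decomposition import LatentDirichletAllocation

    rng = np.random.RandomState(0)
    X_df = pd.DataFrame(rng.randint(0, 5, size=(20, 4)), columns=["w0", "w1", "w2", "w3"])

    lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X_df)
    assert hasattr(lda, "feature_names_in_")  # names remembered from the DataFrame

    # Refitting on a plain ndarray: with this fix, the stale names are deleted
    # instead of lingering from the previous fit.
    lda.fit(X_df.to_numpy())
    assert not hasattr(lda, "feature_names_in_")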
@@ -684,20 +684,6 @@ def _unnormalized_transform(self, X):
         doc_topic_distr : ndarray of shape (n_samples, n_components)
             Document topic distribution for X.
         """
-        check_is_fitted(self)
-
-        # make sure feature size is the same in fitted model and in X
-        X = self._check_non_neg_array(
-            X, reset_n_features=True, whom="LatentDirichletAllocation.transform"
-        )
-        n_samples, n_features = X.shape
-        if n_features != self.components_.shape[1]:
-            raise ValueError(
-                "The provided data has %d dimensions while "
-                "the model was trained with feature size %d."
-                % (n_features, self.components_.shape[1])
-            )
-
         doc_topic_distr, _ = self._e_step(X, cal_sstats=False, random_init=False)

         return doc_topic_distr
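The added side of this hunk is not shown here, but the removed checks do not simply disappear: validation is meant to happen once in the public entry points, without resetting the fitted state. A small behavioural sketch of what that means for transform, written against the intended post-fix behaviour rather than copied from the PR:

    import numpy as np
    from sklearn.decomposition import LatentDirichletAllocation

    rng = np.random.RandomState(0)
    X = rng.randint(0, 5, size=(20, 4))
    lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

    lda.transform(X)                # transforming new data...
    assert lda.n_features_in_ == 4  # ...must not overwrite what fit learned

    # A wrong number of features is still rejected, now by the shared
    # validation machinery instead of the hand-rolled check removed above.
    try:
        lda.transform(X[:, :3])
    except ValueError as exc:
        print(exc)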
@@ -851,12 +837,6 @@ def _perplexity_precomp_distr(self, X, doc_topic_distr=None, sub_sampling=False)
         score : float
             Perplexity score.
         """
-        check_is_fitted(self)
-
-        X = self._check_non_neg_array(
-            X, reset_n_features=True, whom="LatentDirichletAllocation.perplexity"
-        )
-
         if doc_topic_distr is None:
             doc_topic_distr = self._unnormalized_transform(X)
         else:
@@ -902,4 +882,8 @@ def perplexity(self, X, sub_sampling=False):
         score : float
             Perplexity score.
         """
+        check_is_fitted(self)
+        X = self._check_non_neg_array(
+            X, reset_n_features=True, whom="LatentDirichletAllocation.perplexity"
+        )
         return self._perplexity_precomp_distr(X, sub_sampling=sub_sampling)

Review thread on the reset_n_features=True line above:

Reviewer: I don't think we should reset the number of features and their names when computing the perplexity of a dataset.
Suggested change
Author: It felt weird, but I did not want to change the existing behavior. Do you think I should change it anyway?
Reviewer: I think so, maybe with a small non-regression test.
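If the suggested change above is applied (no reset while computing the perplexity), the non-regression test the reviewer asks for could look roughly like the sketch below. The data, column names, and assertions are illustrative; this is not the test that was eventually merged.

    import numpy as np
    import pandas as pd
    from sklearn.decomposition import LatentDirichletAllocation

    rng = np.random.RandomState(0)
    X_df = pd.DataFrame(rng.randint(0, 5, size=(20, 3)), columns=["a", "b", "c"])
    lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X_df)

    # Scoring an ndarray must not reset what was learned during fit.
    lda.perplexity(X_df.to_numpy())
    assert lda.n_features_in_ == 3
    assert list(lda.feature_names_in_) == ["a", "b", "c"]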
@@ -648,19 +648,12 @@ def _fit(
     ):
         self._validate_params()
         if hasattr(self, "classes_"):
-            self.classes_ = None
-
-        X, y = self._validate_data(
-            X,
-            y,
-            accept_sparse="csr",
-            dtype=np.float64,
-            order="C",
-            accept_large_sparse=False,
-        )
+            # delete the attribute otherwise _partial_fit thinks it's not the first call
+            delattr(self, "classes_")

         # labels can be encoded as float, int, or string literals
         # np.unique sorts in asc order; largest class id is positive class
+        y = self._validate_data(y=y)
         classes = np.unique(y)

         if self.warm_start and hasattr(self, "coef_"):

Review thread on the delattr(self, "classes_") change:

Reviewer: Is this change really needed to fix the original problem? If so, it should probably be documented in the changelog. If not needed, I would rather move it outside of this PR.
Author: It's necessary because we now delegate the validation to _partial_fit, which will reset n_features based on the existence of this attribute. I added a comment.
Reviewer: To me it's a design issue of having fit calling partial_fit, but I don't want to fix this in this PR :)
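The discussion above hinges on _partial_fit using the presence of classes_ to decide whether it is the first call, and only resetting n_features_in_ / feature_names_in_ on that first call. A minimal toy estimator sketching that pattern (not scikit-learn's actual SGD code; it relies on the private BaseEstimator._validate_data helper used in the diff, which newer releases replace with sklearn.utils.validation.validate_data):

    import numpy as np
    from sklearn.base import BaseEstimator

    class ToyClassifier(BaseEstimator):
        """Toy mock of the fit -> partial_fit delegation discussed above."""

        def fit(self, X, y):
            if hasattr(self, "classes_"):
                # Without this delattr, a second fit() would look like a continued
                # partial_fit() and the fitted state would never be reset.
                delattr(self, "classes_")
            return self.partial_fit(X, y)

        def partial_fit(self, X, y):
            first_call = not hasattr(self, "classes_")
            # reset=True only on the first call: this is the step that (re)sets
            # n_features_in_ and drops a stale feature_names_in_.
            X, y = self._validate_data(X, y, reset=first_call)
            if first_call:
                self.classes_ = np.unique(y)
            return self

    clf = ToyClassifier().fit(np.ones((4, 2)), [0, 1, 0, 1])
    clf.fit(np.ones((4, 3)), [0, 1, 0, 1])  # refitting resets n_features_in_
    assert clf.n_features_in_ == 3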