
Conversation

XAheli
Contributor

@XAheli XAheli commented Sep 12, 2025

Reference Issues/PRs

Fixes #8785

What does this implement/fix? Explain your changes.

Does your contribution introduce a new dependency? If yes, which one?

What should a reviewer concentrate their feedback on?

Did you add any tests for the change?

Any other comments?

FYI @marrov @ankurankan

@fkiraly fkiraly moved this to PR in progress in May - Sep 2025 mentee projects Sep 17, 2025
@fkiraly fkiraly moved this from PR in progress to PR under review in May - Sep 2025 mentee projects Sep 24, 2025
@XAheli XAheli marked this pull request as ready for review September 25, 2025 12:53
Collaborator

@fkiraly fkiraly left a comment


I think we need to allow X to be None. If that happens, simply use only the endogenous forecaster, and ignore the rest of the algorithm. I think that is the natural, "degenerate" special case for this algorithm.

Also please try to ensure that check_estimator passes.
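The X=None degenerate case suggested above could look roughly like this. A minimal sketch with a hypothetical toy forecaster class and helper function, not sktime API:

```python
class MeanForecaster:
    """Toy stand-in for an sktime forecaster (hypothetical, for illustration)."""

    def fit(self, y, X=None, fh=None):
        self.mean_ = sum(y) / len(y)
        return self

    def predict(self, fh):
        return [self.mean_ for _ in fh]


def fit_dml(outcome_fcst, y, X=None, fh=None):
    # Degenerate case: with no exogenous data there is nothing to
    # residualise against, so fit only the endogenous (outcome)
    # forecaster and skip the treatment/residual stages entirely.
    if X is None:
        return outcome_fcst.fit(y, fh=fh), None
    raise NotImplementedError("full DML path not sketched here")
```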

@jgyasu jgyasu moved this from PR under review to PR in progress in May - Sep 2025 mentee projects Sep 26, 2025
)

X_ex = X[self.exposure_vars].copy()
X_conf = X.drop(columns=self.exposure_vars).copy()


I think it should be explicitly mentioned in the documentation that non-exposure variables will be considered as confounders. Treating non-confounding variables as confounders can lead to biased or high-variance predictions.
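For illustration, the split in the quoted code treats every non-exposure column as a confounder. A small sketch with hypothetical column names:

```python
import pandas as pd

# Toy exogenous frame; column names are hypothetical.
X = pd.DataFrame(
    {"GNP": [1.0, 2.0], "CPI": [3.0, 4.0], "UNEMP": [5.0, 6.0]}
)
exposure_vars = ["GNP"]

# Everything not listed in exposure_vars becomes a confounder, which is
# why documenting this assumption matters: a non-confounding column in
# X_conf is still fed to the nuisance models.
X_ex = X[exposure_vars].copy()
X_conf = X.drop(columns=exposure_vars).copy()
```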

Member


please see the current code and check whether it addresses this correctly

Comment on lines 243 to 248
# Step 4: Fit final versions for prediction
self.forecaster_y_final_ = clone(self.forecaster_y)
self.forecaster_y_final_.fit(y, X=X_conf, fh=fh)

self.forecaster_ex_final_ = clone(self.forecaster_ex)
self.forecaster_ex_final_.fit(X_ex, X=X_conf, fh=fh)


How are these different from the models fitted earlier? Do we need to do this fitting twice?

Contributor Author


How are these different from the models fitted earlier? Do we need to do this fitting twice?

From my understanding (please correct me if wrong), the first fitting is used to compute residuals for the causal effect estimation, and the second fitting is used for the out-of-sample prediction components. This separation is needed because the residual models need to predict on training indices (in-sample), while the final models need to predict on future indices (oos). Using the same fitted model for both can lead to overfitting issues?

Member


Agreed. In short, I think it's because of the forecasting horizon: the first fit uses the in-sample indices as fh and the second fit uses the user-provided fh.
But I would argue that for some forecasters (depending on the internal implementation) this could lead to 2 models with different weights
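The two-fit distinction discussed above can be illustrated with toy numbers (the mean "forecaster" here is hypothetical, just to make the fh difference concrete):

```python
# First fit forecasts the training indices themselves (in-sample) to
# produce residuals; the second fit targets the user-provided
# out-of-sample horizon.
y = [1.0, 2.0, 3.0, 4.0]
mean = sum(y) / len(y)  # toy "forecaster": always predicts the mean

residuals_in_sample = [v - mean for v in y]  # first fit, fh = training index
oos_forecast = [mean, mean]                  # second fit, fh = [1, 2]
```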

Comment on lines 297 to 298
# Combine intervals (assumes independence)
return pred_int_conf + pred_int_causal


I am not sure if this would be as straightforward as adding these two predictions. Depending on the estimator used, both of the quantities would have some posterior distribution, and the final value would depend on the distributions.

Member


could you see the current implementation if that does it the right way? I tried to use some logic from ResidualBoostingForecaster, but it needs a review

Collaborator

@fkiraly fkiraly left a comment


Added review:

  • first fit of the residual forecasters should be in-sample, so fh should be the same as the y passed
  • the probabilistic forecasting methods should use shifting of the probabilistic residual forecast by the point forecast of the original forecaster, not adding the proba forecasts. The logic for this is the same as in the ResidualBoostingForecaster, from which the utility could be reused (maybe move to a common location)

Please also address my review on X=None above.
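The "shifting" suggestion in the review above can be sketched as follows. This is a hedged toy sketch (the helper name and plain tuple representation are hypothetical, not the sktime interval format):

```python
def shift_intervals(point_forecast, residual_intervals):
    # Translate each residual interval by the point forecast, keeping
    # the residual interval's width, instead of summing the widths of
    # two separate interval forecasts (which overstates uncertainty).
    return [
        (p + lo, p + hi)
        for p, (lo, hi) in zip(point_forecast, residual_intervals)
    ]
```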

@fkiraly fkiraly assigned geetu040 and unassigned XAheli Oct 13, 2025
@fkiraly fkiraly moved this from PR in progress to PR under review in May - Sep 2025 mentee projects Oct 13, 2025
@fkiraly fkiraly added the module:forecasting forecasting module: forecasting, incl probabilistic and hierarchical forecasting label Oct 14, 2025
@geetu040 geetu040 requested review from ankurankan, fkiraly and marrov and removed request for ankurankan October 14, 2025 16:42
@fkiraly fkiraly changed the title [ENH] Add DMLForecaster for causal forecasting with confounder adjustment [ENH] DMLForecaster for causal forecasting with confounder adjustment Oct 14, 2025
>>> fh = [1, 2, 3]
>>> dml_forecaster.fit(y_train, X=X_train, fh=fh)
DoubleMLForecaster(exposure_vars=['GNP'], outcome_forecaster=NaiveForecaster(),
residual_forecaster=RecursiveTabularRegressionForecaster(estimator=LinearRegression()),
Collaborator


why is this line not failing code formatting?

Member


don't really know, should they?
are you expecting the "line >80 chars" check failure? I don't think it fails for code blocks starting with ">>>"

3. Fit the *residual forecaster* on these residuals to estimate the causal
effect of the exposure on the outcome.

The residual forecaster is typically a simple linear model, ensuring
Collaborator


"typically" is not precise, remove or rework

Member


removed in 8d987a4

outcome and treatment. If None, defaults to a recursive reduction
forecaster wrapping a linear regression model.

exposure_vars : list of str, optional (default=None)
Collaborator


same here, explain the default

Member


fixed in 8d987a4

Collaborator

@fkiraly fkiraly left a comment


Looks great! Also extremely neat that it works with hierarchical forecasters!

A few small requests:

  • can we move the new file to a folder forecasting.causal?
  • can we add the exact algorithm from the issue to the docstring preamble?
  • question: can we think of a way to shorten the forecaster names?
  • there is some code duplication with ResidualBoostingForecaster; should we try to deduplicate by moving common code to a common utility or mixin?

@fkiraly fkiraly moved this from PR under review to PR in progress in May - Sep 2025 mentee projects Oct 16, 2025
@geetu040
Member

question: can we think of a way to shorten the forecaster names?

Yeah, I agree the names are a bit long, but I'd still lean toward keeping them for clarity and consistency with DML terminology.
If we do want to shorten them though, a few options could be:

  • outcome_forecaster → out_forecaster or y_forecaster (old name)
  • treatment_forecaster → treat_forecaster or ex_forecaster (old name)
  • residual_forecaster → resid_forecaster or res_forecaster (old name)

We could also drop _forecaster entirely and use _model or _est instead if that feels cleaner.

@geetu040 geetu040 requested a review from fkiraly October 17, 2025 13:06
@fkiraly
Collaborator

fkiraly commented Oct 17, 2025

question: can we think of a way to shorten the forecaster names?

Yeah, I agree the names are a bit long, but I'd still lean toward keeping them for clarity and consistency with DML terminology. If we do want to shorten them though, a few options could be:

* `outcome_forecaster` → `out_forecaster` or `y_forecaster` (old name)

* `treatment_forecaster` → `treat_forecaster` or `ex_forecaster` (old name)

* `residual_forecaster` → `resid_forecaster` or `res_forecaster` (old name)

We could also drop _forecaster entirely and use _model or _est instead if that feels cleaner.

Hm, I think "forecaster" is the redundant term, not "outcome" etc. How about: outcome_f, treatment_f, residual_f? Or fcst instead of f to avoid association with "function"?

@geetu040
Member

This PR is ready for review, failing tests are unrelated. Please take a look at your convenience.
FYI: @fkiraly @ankurankan @marrov


**Fit procedure**

1. Split exogenous data ``X`` into exposure variables ``X_exposure`` and
Collaborator


Can you please make this more precise? Which forecaster, which fh, etc. Could you simply transfer my specification to here?

class DoubleMLForecaster(BaseForecaster):
"""Double Machine Learning forecaster for causal time-series forecasting.

Implements the Double Machine Learning (DML) framework [1]_ for time-series,
Collaborator


it is not exactly the algorithm in [1], but an adaptation for time series (by myself) afaik

Collaborator

@fkiraly fkiraly left a comment


Looks good to me! Only minor documentation requests.

Collaborator

@marrov marrov left a comment


Overall extremely solid PR - just have a couple of minor comments for edge cases and tags.

if X is None:
return None, None

# Check that all exposure variables exist in X
Collaborator


L265-L338: when the user supplies exposure_vars but none of those columns are present in the training X that goes into fit, we only warn and continue. As a consequence, self.treatment_fcst_ never gets fitted. However, in predict, _compute_X_exposure_res will still try to call self.treatment_fcst_.predict as soon as a future X does contain those exposure columns. That will probably blow up with a cryptic 'NoneType' object has no attribute 'predict' error or something like that. Could we fail fast (raise) during fit if no exposure columns are available, or explicitly fall back to the pure outcome forecaster so predict remains well-defined?
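The fail-fast check suggested above could be sketched roughly like this (function name and error message are hypothetical, not the PR's code):

```python
def check_exposure_columns(X_columns, exposure_vars):
    # Fail fast at fit time: if none of the requested exposure columns
    # exist in X, the treatment forecaster would never be fitted and
    # predict would later fail with an opaque AttributeError.
    present = [c for c in exposure_vars if c in X_columns]
    if exposure_vars and not present:
        raise ValueError(
            f"None of the exposure_vars {exposure_vars} are present in X; "
            "cannot fit the treatment forecaster."
        )
    return present
```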

self.treatment_fcst_ = clone(self.treatment_fcst)
self.treatment_fcst_.fit(y=X_exposure, X=X_confounder, fh=fh)

def _compute_X_exposure_res(self, X_exposure=None, X_confounder=None, fh=None):
Collaborator


L339-L366: similar to the comment before, but the other way round: if exposures were present at fit but are missing in the X passed to predict, _compute_X_exposure_res returns None and we pipe that straight into self.residual_fcst_.predict. The residual model was fitted with a concrete feature df, so most estimators will likely throw an opaque "X columns mismatch" error. Maybe this is ok and self-explanatory, but maybe it isn't. It would be better to detect this early and raise a clear ValueError telling the user that the forecast X must contain the exposure columns the forecasters were trained on.
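The predict-time counterpart of the check could look like this (again a hypothetical sketch, not the PR's code):

```python
def check_predict_exposures(X_columns, fitted_exposure_vars):
    # Raise a clear error if exposure columns seen at fit time are
    # missing from the X passed to predict, instead of piping None
    # into the residual forecaster and surfacing an opaque
    # column-mismatch error downstream.
    missing = [c for c in fitted_exposure_vars if c not in X_columns]
    if missing:
        raise ValueError(
            f"X passed to predict is missing exposure columns {missing} "
            "that the forecasters were trained on."
        )
```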

and self.treatment_fcst.get_tag("capability:categorical_in_X")
and self.residual_fcst.get_tag("capability:categorical_in_X")
)
req_fh = (
Collaborator


I think this should be an or instead. Right now requires-fh-in-fit is the logical AND of the three forecasters. If I plug in a DirectTabularRegressionForecaster (requires fh) as residual_fcst, but keep naive outcome/treatment models, _update_tags_from_components still leaves requires-fh-in-fit=False. fit happily runs with fh=None and we crash inside _fit when self.residual_fcst_.fit(..., fh=None) raises the error. Could we switch that aggregation to an OR so the requirement is surfaced early?
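The suggested OR aggregation is trivial to express; a hypothetical helper for illustration:

```python
def aggregate_requires_fh(*component_requires):
    # OR across components: the composite must request fh at fit time
    # if ANY inner forecaster needs it, not only if all of them do,
    # so the requirement surfaces before an inner fit(..., fh=None) fails.
    return any(component_requires)
```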

assumes there are no explicit exposure variables, using all features as
confounders and focusing purely on predictive residual correction
rather than causal effect estimation. In this case, the model behaves
equivalently to a ``ResidualBoostingForecaster``.
Collaborator


Nit: This isn’t quite true, no? In that path the residual forecaster is trained with X=None, whereas ResidualBoostingForecaster would still feed the confounder (X) df through. I suggest rephrasing to describe the actual fallback instead of promising equivalence with ResidualBoostingForecaster.


Labels

enhancement Adding new functionality module:forecasting forecasting module: forecasting, incl probabilistic and hierarchical forecasting

Projects

Status: PR in progress

Development

Successfully merging this pull request may close these issues.

[ENH] Add DMLForecaster for causal forecasting with confounder adjustment

5 participants