Added mean_absolute_percentage_error in metrics (fixes #10708) #15007


Merged
merged 120 commits on Jul 4, 2020
Commits
eb70678
Added mean_absolute_percentage_error in metrics
ashutosh1919 Sep 18, 2019
0fca06e
Added mean_absolute_percentage_error in metrics
ashutosh1919 Sep 18, 2019
7191b88
Added mean_absolute_percentage_error in metrics
ashutosh1919 Sep 18, 2019
140afe2
Added MAPE
ashutosh1919 Sep 18, 2019
41c1bd1
Added MAPE
ashutosh1919 Sep 18, 2019
a401965
Added MAPE
ashutosh1919 Sep 18, 2019
39a7af0
Added MAPE
ashutosh1919 Sep 18, 2019
0aa9b53
Added MAPE
ashutosh1919 Sep 18, 2019
b8f5187
MAPE implementation changed
ashutosh1919 Sep 18, 2019
f83fdf4
MAPE implementation changed
ashutosh1919 Sep 18, 2019
6b2ead2
Removed Clip and applied np.maximum
ashutosh1919 Sep 19, 2019
2c7c8a5
MAPE Added in Docs
ashutosh1919 Sep 20, 2019
65afa12
Changed model_evaluation descriptions and other changes
ashutosh1919 Sep 24, 2019
99d080d
Resolving error
ashutosh1919 Sep 24, 2019
dc988ae
model_evaluation table changed
ashutosh1919 Sep 24, 2019
8274e28
model_evaluation table changed
ashutosh1919 Sep 24, 2019
647ec2c
model_evaluation table changed
ashutosh1919 Sep 24, 2019
4f7ea9c
Merge remote-tracking branch 'upstream/master'
ashutosh1919 Sep 25, 2019
832ac40
Merge branch 'master' of https://github.com/ashutosh1919/scikit-learn
ashutosh1919 Sep 25, 2019
0f53d4f
Changed Doc line
ashutosh1919 Sep 26, 2019
ffe95c9
Merge remote-tracking branch 'upstream/master'
ashutosh1919 Sep 26, 2019
da326fe
Sync with remote
ashutosh1919 Nov 17, 2019
cdb5d09
metrics init file changed
ashutosh1919 Nov 17, 2019
3ec0dd8
test_regression resolved
ashutosh1919 Nov 17, 2019
cf54616
test_regression resolved
ashutosh1919 Nov 17, 2019
65b14ce
Resolving Forecasting text
ashutosh1919 Dec 29, 2019
b4d7336
Render error
ashutosh1919 Dec 29, 2019
f9e6c01
Merge remote-tracking branch 'upstream/master'
ashutosh1919 Jan 6, 2020
76a4bf9
Resolving render errors
ashutosh1919 Jan 6, 2020
98bed82
Render doc error
ashutosh1919 Jan 6, 2020
9e0a347
Resolving render error
ashutosh1919 Jan 6, 2020
545bacf
Resolving render doc
ashutosh1919 Jan 6, 2020
ba0b63b
Resolving render doc
ashutosh1919 Jan 6, 2020
b731255
Resolving render doc
ashutosh1919 Jan 7, 2020
197f576
Update doc/modules/model_evaluation.rst
ashutosh1919 Jan 7, 2020
5869dda
Update sklearn/metrics/_regression.py
ashutosh1919 Jan 7, 2020
1adc635
Update sklearn/metrics/_regression.py
ashutosh1919 Jan 7, 2020
5f25b68
Applying suggested changes
ashutosh1919 Jan 7, 2020
b3eaca1
Merge remote-tracking branch 'upstream/master'
ashutosh1919 Jan 7, 2020
c28a2f5
Applying suggested changes
ashutosh1919 Jan 7, 2020
5444ac6
Applying suggested changes
ashutosh1919 Jan 7, 2020
5dbe07d
Applying suggested changes
ashutosh1919 Jan 7, 2020
90f7533
Applying suggested changes
ashutosh1919 Jan 7, 2020
5897097
Applying suggested changes
ashutosh1919 Jan 7, 2020
00665b4
Merge remote-tracking branch 'upstream/master'
ashutosh1919 Feb 7, 2020
3962b9a
Update doc/modules/model_evaluation.rst
ashutosh1919 Feb 7, 2020
aa8f6ec
Update doc/modules/model_evaluation.rst
ashutosh1919 Feb 7, 2020
fdc197f
Update doc/modules/model_evaluation.rst
ashutosh1919 Feb 7, 2020
94b6b5b
Update doc/modules/model_evaluation.rst
ashutosh1919 Feb 7, 2020
595a0c2
Update doc/modules/model_evaluation.rst
ashutosh1919 Feb 7, 2020
a64e5ff
Update doc/modules/model_evaluation.rst
ashutosh1919 Feb 7, 2020
1097911
Update doc/modules/model_evaluation.rst
ashutosh1919 Feb 7, 2020
9782f34
Doc Too long line error resolved
ashutosh1919 Feb 7, 2020
27be674
Merge remote-tracking branch 'upstream/master'
ashutosh1919 Feb 20, 2020
dbc2c4a
datatype changed and made compatible to y_true
ashutosh1919 Feb 20, 2020
e4cd050
Added scorer
ashutosh1919 Feb 20, 2020
c7a8b5f
eps datatype changed to np.float64
ashutosh1919 Feb 20, 2020
d406081
test_regression.py is changed to more meaningful test cases
ashutosh1919 Feb 20, 2020
5db5f5e
test_regression.py is changed to more meaningful test cases
ashutosh1919 Feb 20, 2020
f4cfc22
Resolving errors related to scorer tests in test_common.py and _score…
ashutosh1919 Feb 20, 2020
b681dcb
Updated v0.23.rst in whats_new
ashutosh1919 Feb 20, 2020
d4fcc39
resolving errors of mape scorer
ashutosh1919 Feb 20, 2020
9261cee
resolving errors of mape scorer
ashutosh1919 Feb 20, 2020
a161f79
resolving errors of mape scorer
ashutosh1919 Feb 20, 2020
d102f86
resolving errors of mape scorer
ashutosh1919 Feb 20, 2020
1970a30
modified test case in model_evaluation.rst
ashutosh1919 Feb 20, 2020
2b4128d
modified doc and code as per second batch comments
ashutosh1919 Feb 21, 2020
c6c7ba9
Resolving r2_scorer object error
ashutosh1919 Feb 21, 2020
2a59c17
Resolving r2_scorer object error
ashutosh1919 Feb 21, 2020
af63de8
Merge remote-tracking branch 'upstream/master'
ashutosh1919 Feb 21, 2020
67f857e
Resolving r2_scorer object error
ashutosh1919 Feb 21, 2020
cb0a635
Resolving r2_scorer object error
ashutosh1919 Feb 21, 2020
d104f2c
Conflict Resolved
ashutosh1919 Feb 24, 2020
a03affe
Added changes of conflict
ashutosh1919 Feb 24, 2020
f9fe4a4
Update sklearn/metrics/_regression.py
ashutosh1919 Feb 25, 2020
723f116
Update sklearn/metrics/_regression.py
ashutosh1919 Feb 25, 2020
156e633
Update sklearn/metrics/_regression.py
ashutosh1919 Feb 25, 2020
92af06b
Update sklearn/metrics/_regression.py
ashutosh1919 Feb 25, 2020
dd709a9
Modified Files to optimize changes
ashutosh1919 Feb 27, 2020
9151167
Merge remote-tracking branch 'upstream/master'
ashutosh1919 Feb 27, 2020
ea99cff
Modified Files to optimize changes
ashutosh1919 Feb 27, 2020
2c06403
Changed description of contributors info
ashutosh1919 Feb 27, 2020
019659d
Merge branch 'master' into master
ogrisel Mar 4, 2020
291720e
DOC improve diabetes dataset description (#16534)
maikia Mar 4, 2020
6578fb4
TST add test of fit attributes (#16286)
agramfort Mar 4, 2020
b91501e
ENH Minimal Generalized linear models implementation (L2 + lbfgs) (#1…
rth Mar 4, 2020
4905ac3
FIX Adress decomposition.PCA mle option problem (#16224)
lschwetlick Mar 4, 2020
fdbff6c
DOC add 0.22.2 in website news (#16631)
jeremiedbb Mar 4, 2020
43293a6
TST Enable california_housing pandas test in cron job (#16547)
rth Mar 4, 2020
0271b76
EXA align lorenz curves between the two examples with GLMs (#16640)
rth Mar 5, 2020
f7dfe4d
DOC update n_jobs description in DBSCAN (#16615)
adrinjalali Mar 5, 2020
2131504
FIX Pass sample_weight when predicting on stacked folds (#16539)
Mar 6, 2020
68d1fef
BLD Turns off memory_profiler in examples to fix CircleCI (#16629)
thomasjpfan Mar 9, 2020
a5a82ab
BLD Updates osx vm image in azure pipelines (#16647)
thomasjpfan Mar 9, 2020
9c17a60
FIX: normalizer l_inf should take maximum of absolute values (#16633)
maurapintor Mar 10, 2020
16f4208
ENH Add check for non binary variables in OneHotEncoder. (#16585)
cmarmo Mar 10, 2020
d7fbef0
DOC Update LICENSE Year (#16660)
merrcury Mar 10, 2020
8c8383b
BUG Fix issue with KernelPCA.inverse_transform (#16655)
lrjball Mar 10, 2020
895cb6a
[MRG] DOC Add example about interpretation of coefficients of linear …
cmarmo Mar 10, 2020
c464e92
MNT Remove unused imports (#16665)
alexhenrie Mar 10, 2020
72f39d9
MNT Restores behavior of conditioning on linting for most instances (…
thomasjpfan Mar 11, 2020
7cf9e1e
BUG Fixes HistGradientBoosting when warm_start is on + early_stopping…
thomasjpfan Mar 11, 2020
b2b4dbb
BUG fix the math issue in latex compilation (#16673)
glemaitre Mar 11, 2020
ba3fcdb
BUG remove $ math env due to latex error (#16674)
glemaitre Mar 11, 2020
864d028
DOC add example to tree.ExtraTreeClassifier (#16671)
nilichen Mar 11, 2020
dfdda83
PEP8 in test_encoders.py
ogrisel Mar 12, 2020
3686d55
MNT Removes unused private attributes (#16675)
thomasjpfan Mar 12, 2020
77fb39d
CI Check for unused imports when linting (#16678)
rth Mar 12, 2020
e087ea7
DOC wording in linear model interpretation example (#16680)
GaelVaroquaux Mar 13, 2020
140dae4
API make __init__ params in cross_decomposition kw-only (#16682)
adrinjalali Mar 13, 2020
dd437aa
DOC Adds example to OAS (#16681)
marenwestermann Mar 13, 2020
7ec9c61
DOC Add note on bias induced by dropping categories in OneHotE… (#16679)
ogrisel Mar 13, 2020
d3e7041
API make __init__ params in compose module kw-only (#16542)
adrinjalali Mar 13, 2020
a64758e
Merge remote-tracking branch 'upstream/master'
ashutosh1919 Mar 13, 2020
9b1cdc9
Removed x100 from MAPE and modified tests too
ashutosh1919 Mar 19, 2020
014a005
Merge remote-tracking branch 'upstream/master'
ashutosh1919 Mar 19, 2020
4aeb1dc
Merge remote-tracking branch 'upstream/master'
ashutosh1919 Apr 1, 2020
297ce25
Changed range
ashutosh1919 Apr 1, 2020
5a8476a
Merge remote-tracking branch 'upstream/master' into ashutosh1919-master
rth Jul 4, 2020
7b2c86f
DOC Add absolute_percentage_error to doc/modules/model_evaluation.rst
rth Jul 4, 2020
3 changes: 2 additions & 1 deletion doc/modules/classes.rst
@@ -900,7 +900,7 @@ Miscellaneous
manifold.smacof
manifold.spectral_embedding
manifold.trustworthiness


.. _metrics_ref:

@@ -981,6 +981,7 @@ details.
metrics.mean_squared_error
metrics.mean_squared_log_error
metrics.median_absolute_error
metrics.mean_absolute_percentage_error
metrics.r2_score
metrics.mean_poisson_deviance
metrics.mean_gamma_deviance
117 changes: 77 additions & 40 deletions doc/modules/model_evaluation.rst
@@ -54,51 +54,52 @@ the model and the data, like :func:`metrics.mean_squared_error`, are
available as neg_mean_squared_error which return the negated value
of the metric.

============================== ============================================= ==================================
Scoring Function Comment
============================== ============================================= ==================================
==================================== ============================================== ==================================
Scoring Function Comment
==================================== ============================================== ==================================
**Classification**
'accuracy' :func:`metrics.accuracy_score`
'balanced_accuracy' :func:`metrics.balanced_accuracy_score`
'average_precision' :func:`metrics.average_precision_score`
'neg_brier_score' :func:`metrics.brier_score_loss`
'f1' :func:`metrics.f1_score` for binary targets
'f1_micro' :func:`metrics.f1_score` micro-averaged
'f1_macro' :func:`metrics.f1_score` macro-averaged
'f1_weighted' :func:`metrics.f1_score` weighted average
'f1_samples' :func:`metrics.f1_score` by multilabel sample
'neg_log_loss' :func:`metrics.log_loss` requires ``predict_proba`` support
'precision' etc. :func:`metrics.precision_score` suffixes apply as with 'f1'
'recall' etc. :func:`metrics.recall_score` suffixes apply as with 'f1'
'jaccard' etc. :func:`metrics.jaccard_score` suffixes apply as with 'f1'
'roc_auc' :func:`metrics.roc_auc_score`
'roc_auc_ovr' :func:`metrics.roc_auc_score`
'roc_auc_ovo' :func:`metrics.roc_auc_score`
'roc_auc_ovr_weighted' :func:`metrics.roc_auc_score`
'roc_auc_ovo_weighted' :func:`metrics.roc_auc_score`
'accuracy' :func:`metrics.accuracy_score`
'balanced_accuracy' :func:`metrics.balanced_accuracy_score`
'average_precision' :func:`metrics.average_precision_score`
'neg_brier_score' :func:`metrics.brier_score_loss`
'f1' :func:`metrics.f1_score` for binary targets
'f1_micro' :func:`metrics.f1_score` micro-averaged
'f1_macro' :func:`metrics.f1_score` macro-averaged
'f1_weighted' :func:`metrics.f1_score` weighted average
'f1_samples' :func:`metrics.f1_score` by multilabel sample
'neg_log_loss' :func:`metrics.log_loss` requires ``predict_proba`` support
'precision' etc. :func:`metrics.precision_score` suffixes apply as with 'f1'
'recall' etc. :func:`metrics.recall_score` suffixes apply as with 'f1'
'jaccard' etc. :func:`metrics.jaccard_score` suffixes apply as with 'f1'
'roc_auc' :func:`metrics.roc_auc_score`
'roc_auc_ovr' :func:`metrics.roc_auc_score`
'roc_auc_ovo' :func:`metrics.roc_auc_score`
'roc_auc_ovr_weighted' :func:`metrics.roc_auc_score`
'roc_auc_ovo_weighted' :func:`metrics.roc_auc_score`

**Clustering**
'adjusted_mutual_info_score' :func:`metrics.adjusted_mutual_info_score`
'adjusted_rand_score' :func:`metrics.adjusted_rand_score`
'completeness_score' :func:`metrics.completeness_score`
'fowlkes_mallows_score' :func:`metrics.fowlkes_mallows_score`
'homogeneity_score' :func:`metrics.homogeneity_score`
'mutual_info_score' :func:`metrics.mutual_info_score`
'normalized_mutual_info_score' :func:`metrics.normalized_mutual_info_score`
'v_measure_score' :func:`metrics.v_measure_score`
'adjusted_mutual_info_score' :func:`metrics.adjusted_mutual_info_score`
'adjusted_rand_score' :func:`metrics.adjusted_rand_score`
'completeness_score' :func:`metrics.completeness_score`
'fowlkes_mallows_score' :func:`metrics.fowlkes_mallows_score`
'homogeneity_score' :func:`metrics.homogeneity_score`
'mutual_info_score' :func:`metrics.mutual_info_score`
'normalized_mutual_info_score' :func:`metrics.normalized_mutual_info_score`
'v_measure_score' :func:`metrics.v_measure_score`

**Regression**
'explained_variance' :func:`metrics.explained_variance_score`
'max_error' :func:`metrics.max_error`
'neg_mean_absolute_error' :func:`metrics.mean_absolute_error`
'neg_mean_squared_error' :func:`metrics.mean_squared_error`
'neg_root_mean_squared_error' :func:`metrics.mean_squared_error`
'neg_mean_squared_log_error' :func:`metrics.mean_squared_log_error`
'neg_median_absolute_error' :func:`metrics.median_absolute_error`
'r2' :func:`metrics.r2_score`
'neg_mean_poisson_deviance' :func:`metrics.mean_poisson_deviance`
'neg_mean_gamma_deviance' :func:`metrics.mean_gamma_deviance`
============================== ============================================= ==================================
'explained_variance' :func:`metrics.explained_variance_score`
'max_error' :func:`metrics.max_error`
'neg_mean_absolute_error' :func:`metrics.mean_absolute_error`
'neg_mean_squared_error' :func:`metrics.mean_squared_error`
'neg_root_mean_squared_error' :func:`metrics.mean_squared_error`
'neg_mean_squared_log_error' :func:`metrics.mean_squared_log_error`
'neg_median_absolute_error' :func:`metrics.median_absolute_error`
'r2' :func:`metrics.r2_score`
'neg_mean_poisson_deviance' :func:`metrics.mean_poisson_deviance`
'neg_mean_gamma_deviance' :func:`metrics.mean_gamma_deviance`
'neg_mean_absolute_percentage_error' :func:`metrics.mean_absolute_percentage_error`
==================================== ============================================== ==================================


Usage examples:
@@ -1963,6 +1964,42 @@ function::
>>> mean_squared_log_error(y_true, y_pred)
0.044...

.. _mean_absolute_percentage_error:

Mean absolute percentage error
------------------------------
The :func:`mean_absolute_percentage_error` (MAPE), also known as mean absolute
percentage deviation (MAPD), is an evaluation metric for regression problems.
This metric is sensitive to relative errors; for example, it is not changed by
a global scaling of the target variable.

If :math:`\hat{y}_i` is the predicted value of the :math:`i`-th sample
and :math:`y_i` is the corresponding true value, then the mean absolute percentage
error (MAPE) estimated over :math:`n_{\text{samples}}` is defined as

.. math::

\text{MAPE}(y, \hat{y}) = \frac{1}{n_{\text{samples}}} \sum_{i=0}^{n_{\text{samples}}-1} \frac{\left| y_i - \hat{y}_i \right|}{\max(\epsilon, \left| y_i \right|)}

where :math:`\epsilon` is an arbitrarily small yet strictly positive number
used to avoid undefined results when :math:`y` is zero.
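
As a sanity check, the formula can be evaluated directly with NumPy. This is a
minimal sketch, not the library implementation, using the same illustrative
values as the example further below::

>>> import numpy as np
>>> y_true = np.array([1, 10, 1e6])
>>> y_pred = np.array([0.9, 15, 1.2e6])
>>> eps = np.finfo(np.float64).eps
>>> np.mean(np.abs(y_true - y_pred) / np.maximum(np.abs(y_true), eps))
0.2666...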

The :func:`mean_absolute_percentage_error` function supports multioutput.

Here is a small example of usage of the :func:`mean_absolute_percentage_error`
function::

>>> from sklearn.metrics import mean_absolute_percentage_error
>>> y_true = [1, 10, 1e6]
>>> y_pred = [0.9, 15, 1.2e6]
>>> mean_absolute_percentage_error(y_true, y_pred)
0.2666...

In the above example, if we had used `mean_absolute_error`, it would have
ignored the small-magnitude values and only reflected the error in the
prediction of the highest-magnitude value. MAPE does not have this problem
because it computes the relative error with respect to the actual output for
every sample.
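
To make the contrast concrete, here is a minimal follow-up using the same
``y_true`` and ``y_pred`` as above::

>>> from sklearn.metrics import mean_absolute_error
>>> mean_absolute_error(y_true, y_pred)
66668.36...

The absolute error is dominated almost entirely by the ``1e6`` target, whereas
the MAPE above weights the relative error of every sample equally.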

.. _median_absolute_error:

Median absolute error
2 changes: 1 addition & 1 deletion doc/whats_new/_contributors.rst
@@ -176,4 +176,4 @@

.. _Nicolas Hug: https://github.com/NicolasHug

.. _Guillaume Lemaitre: https://github.com/glemaitre
.. _Guillaume Lemaitre: https://github.com/glemaitre
6 changes: 6 additions & 0 deletions doc/whats_new/v0.24.rst
@@ -150,6 +150,12 @@ Changelog
:mod:`sklearn.metrics`
......................

- |Feature| Added the :func:`metrics.mean_absolute_percentage_error` metric and
  the associated scorer for regression problems. :issue:`10708` is fixed by
  :pr:`15007` from :user:`Ashutosh Hathidara <ashutosh1919>`. The scorer and
  some practical test cases were taken from :pr:`10711` by
  :user:`Mohamed Ali Jamaoui <mohamed-ali>`.

- |Fix| Fixed a bug in :func:`metrics.mean_squared_error` where the
average of multiple RMSE values was incorrectly calculated as the root of the
average of multiple MSE values.
2 changes: 2 additions & 0 deletions sklearn/metrics/__init__.py
@@ -64,6 +64,7 @@
from ._regression import mean_squared_error
from ._regression import mean_squared_log_error
from ._regression import median_absolute_error
from ._regression import mean_absolute_percentage_error
from ._regression import r2_score
from ._regression import mean_tweedie_deviance
from ._regression import mean_poisson_deviance
@@ -128,6 +129,7 @@
'mean_gamma_deviance',
'mean_tweedie_deviance',
'median_absolute_error',
'mean_absolute_percentage_error',
'multilabel_confusion_matrix',
'mutual_info_score',
'ndcg_score',
77 changes: 77 additions & 0 deletions sklearn/metrics/_regression.py
@@ -20,6 +20,7 @@
# Michael Eickenberg <[email protected]>
# Konstantin Shmelkov <[email protected]>
# Christian Lorentzen <[email protected]>
# Ashutosh Hathidara <[email protected]>
# License: BSD 3 clause

import numpy as np
@@ -41,6 +42,7 @@
"mean_squared_error",
"mean_squared_log_error",
"median_absolute_error",
"mean_absolute_percentage_error",
"r2_score",
"explained_variance_score",
"mean_tweedie_deviance",
@@ -192,6 +194,81 @@ def mean_absolute_error(y_true, y_pred, *,
return np.average(output_errors, weights=multioutput)


def mean_absolute_percentage_error(y_true, y_pred,
sample_weight=None,
multioutput='uniform_average'):
"""Mean absolute percentage error regression loss

Note that the output is not a percentage in the range [0, 100]; instead, it
is represented in the range [0, 1/eps]. Read more in the
:ref:`User Guide <mean_absolute_percentage_error>`.

Parameters
----------
y_true : array-like of shape (n_samples,) or (n_samples, n_outputs)
Ground truth (correct) target values.

y_pred : array-like of shape (n_samples,) or (n_samples, n_outputs)
Estimated target values.

sample_weight : array-like of shape (n_samples,), default=None
Sample weights.

multioutput : {'raw_values', 'uniform_average'} or array-like
Defines aggregating of multiple output values.
Array-like value defines weights used to average errors.
If input is a list, then the shape must be (n_outputs,).

'raw_values' :
Returns a full set of errors in case of multioutput input.

'uniform_average' :
Errors of all outputs are averaged with uniform weight.

Returns
-------
loss : float or ndarray of floats in the range [0, 1/eps]
If multioutput is 'raw_values', then mean absolute percentage error
is returned for each output separately.
If multioutput is 'uniform_average' or an ndarray of weights, then the
weighted average of all output errors is returned.

MAPE output is non-negative floating point. The best value is 0.0.
Note that bad predictions can lead to arbitrarily large MAPE values,
especially if some y_true values are very close to zero. In that case
a large finite value is returned instead of `inf`.

Examples
--------
>>> from sklearn.metrics import mean_absolute_percentage_error
>>> y_true = [3, -0.5, 2, 7]
>>> y_pred = [2.5, 0.0, 2, 8]
>>> mean_absolute_percentage_error(y_true, y_pred)
0.3273...
>>> y_true = [[0.5, 1], [-1, 1], [7, -6]]
>>> y_pred = [[0, 2], [-1, 2], [8, -5]]
>>> mean_absolute_percentage_error(y_true, y_pred)
0.5515...
>>> mean_absolute_percentage_error(y_true, y_pred, multioutput=[0.3, 0.7])
0.6198...
"""
y_type, y_true, y_pred, multioutput = _check_reg_targets(
y_true, y_pred, multioutput)
check_consistent_length(y_true, y_pred, sample_weight)
epsilon = np.finfo(np.float64).eps
mape = np.abs(y_pred - y_true) / np.maximum(np.abs(y_true), epsilon)
output_errors = np.average(mape,
weights=sample_weight, axis=0)
if isinstance(multioutput, str):
if multioutput == 'raw_values':
return output_errors
elif multioutput == 'uniform_average':
# pass None as weights to np.average: uniform mean
multioutput = None

return np.average(output_errors, weights=multioutput)


@_deprecate_positional_args
def mean_squared_error(y_true, y_pred, *,
sample_weight=None,
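The docstring above notes that a large finite value, rather than `inf`, is
returned when a true value is exactly zero. A minimal sketch of that behaviour
(the values here are only illustrative):

import numpy as np
from sklearn.metrics import mean_absolute_percentage_error

# One true value is exactly zero: the epsilon floor in the denominator
# keeps the result finite but very large instead of dividing by zero.
y_true = [0.0, 1.0]
y_pred = [1.0, 1.0]
score = mean_absolute_percentage_error(y_true, y_pred)
print(np.isfinite(score))  # True: no `inf` is returned
print(score > 1e6)         # True: the value is roughly 1 / (2 * eps)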
6 changes: 5 additions & 1 deletion sklearn/metrics/_scorer.py
@@ -30,7 +30,7 @@
f1_score, roc_auc_score, average_precision_score,
precision_score, recall_score, log_loss,
balanced_accuracy_score, explained_variance_score,
brier_score_loss, jaccard_score)
brier_score_loss, jaccard_score, mean_absolute_percentage_error)

from .cluster import adjusted_rand_score
from .cluster import homogeneity_score
@@ -614,6 +614,9 @@ def make_scorer(score_func, *, greater_is_better=True, needs_proba=False,
greater_is_better=False)
neg_mean_absolute_error_scorer = make_scorer(mean_absolute_error,
greater_is_better=False)
neg_mean_absolute_percentage_error_scorer = make_scorer(
mean_absolute_percentage_error, greater_is_better=False
)
neg_median_absolute_error_scorer = make_scorer(median_absolute_error,
greater_is_better=False)
neg_root_mean_squared_error_scorer = make_scorer(mean_squared_error,
@@ -674,6 +677,7 @@ def make_scorer(score_func, *, greater_is_better=True, needs_proba=False,
max_error=max_error_scorer,
neg_median_absolute_error=neg_median_absolute_error_scorer,
neg_mean_absolute_error=neg_mean_absolute_error_scorer,
neg_mean_absolute_percentage_error=neg_mean_absolute_percentage_error_scorer, # noqa
neg_mean_squared_error=neg_mean_squared_error_scorer,
neg_mean_squared_log_error=neg_mean_squared_log_error_scorer,
neg_root_mean_squared_error=neg_root_mean_squared_error_scorer,
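With the scorer registered above, the metric becomes addressable by its string
name in model selection tools. A minimal usage sketch (the regressor and
dataset below are only illustrative):

from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

X, y = make_regression(n_samples=100, random_state=0)
# Scorers follow the "greater is better" convention, hence the negated
# MAPE returned for each fold.
scores = cross_val_score(LinearRegression(), X, y, cv=5,
                         scoring='neg_mean_absolute_percentage_error')
print(scores)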
16 changes: 13 additions & 3 deletions sklearn/metrics/tests/test_common.py
@@ -41,6 +41,7 @@
from sklearn.metrics import max_error
from sklearn.metrics import matthews_corrcoef
from sklearn.metrics import mean_absolute_error
from sklearn.metrics import mean_absolute_percentage_error
from sklearn.metrics import mean_squared_error
from sklearn.metrics import mean_tweedie_deviance
from sklearn.metrics import mean_poisson_deviance
@@ -98,6 +99,7 @@
"mean_absolute_error": mean_absolute_error,
"mean_squared_error": mean_squared_error,
"median_absolute_error": median_absolute_error,
"mean_absolute_percentage_error": mean_absolute_percentage_error,
"explained_variance_score": explained_variance_score,
"r2_score": partial(r2_score, multioutput='variance_weighted'),
"mean_normal_deviance": partial(mean_tweedie_deviance, power=0),
@@ -425,7 +427,7 @@ def precision_recall_curve_padded_thresholds(*args, **kwargs):
# Regression metrics with "multioutput-continuous" format support
MULTIOUTPUT_METRICS = {
"mean_absolute_error", "median_absolute_error", "mean_squared_error",
"r2_score", "explained_variance_score"
"r2_score", "explained_variance_score", "mean_absolute_percentage_error"
}

# Symmetric with respect to their input arguments y_true and y_pred
@@ -472,7 +474,7 @@ def precision_recall_curve_padded_thresholds(*args, **kwargs):
"macro_f0.5_score", "macro_f2_score", "macro_precision_score",
"macro_recall_score", "log_loss", "hinge_loss",
"mean_gamma_deviance", "mean_poisson_deviance",
"mean_compound_poisson_deviance"
"mean_compound_poisson_deviance", "mean_absolute_percentage_error"
}


@@ -1371,7 +1373,15 @@ def test_thresholded_multilabel_multioutput_permutations_invariance(name):
y_true_perm = y_true[:, perm]

current_score = metric(y_true_perm, y_score_perm)
assert_almost_equal(score, current_score)
if metric == mean_absolute_percentage_error:
assert np.isfinite(current_score)
assert current_score > 1e6
# We do not compare the values for MAPE because whenever a y_true
# value is exactly zero, the MAPE value does not signify anything
# meaningful; in that case we only expect a very large finite value.
else:
assert_almost_equal(score, current_score)


@pytest.mark.parametrize(