Commit d671948: Changelog
Author: Fabian Pedregosa
Parent: 1c5d8aa

File tree: 9 files changed (+199, -38 lines)

AUTHORS.rst
Lines changed: 11 additions & 1 deletion

@@ -50,14 +50,17 @@ People

 * `Gael Varoquaux <http://gael-varoquaux.info/blog/>`_

-* `Jake VanderPlas <http://www.astro.washington.edu/users/vanderplas/>`_
+* `Jake VanderPlas <http://www.astro.washington.edu/users/vanderplas/>`_
   contributed the BallTree module in February 2010.

 * `Alexandre Gramfort
   <http://www-sop.inria.fr/members/Alexandre.Gramfort/index.fr.html>`_

 * `Olivier Grisel <http://twitter.com/ogrisel>`_

+* Bertrand Thirion contributed the hierarchical clustering module together
+  with Gael Varoquaux, Alexandre Gramfort and Vincent Michel.
+
 * Vincent Michel.

 * Chris Filo Gorgolewski

@@ -91,6 +94,13 @@ People
 * `Alexandre Passos <http://atpassos.posterous.com>`_ joined the
   project in November 2010 and contributed the fast SVD variant.

+* `Vlad Niculae <http://vene.ro>`_ joined the project in March 2011 and
+  contributed the non-negative matrix factorization module.
+
+
+* Thouis (Ray) Jones joined the project and contributed the Cython bindings
+  for the BallTree class.
+
 If I forgot anyone, do not hesitate to send me an email to
 [email protected] and I'll include you in the list.

doc/install.rst
Lines changed: 15 additions & 6 deletions

@@ -30,7 +30,7 @@ Installing an official release
 Installing from source
 ----------------------

-Installing from source requires you to have installed numpy,
+Installing from source requires you to have installed numpy,
 scipy, setuptools, python development headers and a working C++
 compiler. Under debian-like systems you can get all this by executing
 with root privileges::

@@ -117,7 +117,7 @@ executing the command::
   python setup.py install


-To build a precompiled package like the ones distributed at
+To build a precompiled package like the ones distributed at
 `the downloads section <https://sourceforge.net/projects/scikit-learn/files/>`_,
 the command to execute is::

@@ -132,7 +132,7 @@ Third party distributions of scikits.learn
 ==========================================

 Some third-party distributions are now providing versions of
-scikits.learn integrated with their package-management systems.
+scikits.learn integrated with their package-management systems.

 These can make installation and upgrading much easier for users since
 the integration includes the ability to automatically install

@@ -151,6 +151,14 @@ using the following commands with root privileges::
   apt-get install python-scikits-learn


+Python(x, y)
+------------
+
+The `Python(x, y) <http://pythonxy.com>`_ distribution ships scikit-learn as
+an additional plugin, which can be found in the
+`Additional plugins <http://code.google.com/p/pythonxy/wiki/AdditionalPlugins>`_ page.
+
+
 Enthought python distribution
 -----------------------------

@@ -189,7 +197,7 @@ Testing

 Testing requires having the `nose
 <http://somethingaboutorange.com/mrl/projects/nose/>`_ library. After
-installation, the package can be tested by executing from outside the
+installation, the package can be tested by executing *from outside* the
 source directory::

   python -c "import scikits.learn as skl; skl.test()"

@@ -200,8 +208,9 @@ eventually should finish with a text similar to::
   Ran 601 tests in 27.920s
   OK (SKIP=2)

-otherwise please consider submitting a bug in the :ref:`bug_tracker`
-or to the :ref:`mailing_lists`.
+otherwise please consider posting an issue to the `bug tracker
+<https://github.com/scikit-learn/scikit-learn/issues>`_ or to the
+:ref:`mailing_lists`.

 scikits.learn can also be tested without having the package
 installed. For this you must compile the sources inplace from the

doc/modules/classes.rst
Lines changed: 12 additions & 0 deletions

@@ -339,6 +339,18 @@ Signal Decomposition

   decomposition.fastica

+
+Linear Discriminant Analysis
+============================
+
+.. autosummary::
+
+   :toctree: generated
+   :template: class.rst
+
+   lda.LDA
+
+
 Cross Validation
 ================

doc/modules/clustering.rst
Lines changed: 2 additions & 0 deletions

@@ -31,6 +31,8 @@ data can be found in the `labels_` attribute.
 long as a similarity measure exists for such objects.


+.. _k_means:
+
 K-means
 =======

doc/modules/covariance.rst
Lines changed: 12 additions & 8 deletions

@@ -4,6 +4,9 @@
 Covariance estimation
 ===================================================

+.. currentmodule:: scikits.learn.covariance
+
+
 Many statistical problems require at some point the estimation of a
 population's covariance matrix, which can be seen as an estimation of
 data set scatter plot shape. Most of the time, such an estimation has

@@ -20,7 +23,6 @@ observations are independent and identically distributed.
 Empirical covariance
 ====================

-.. currentmodule:: scikits.learn.covariance

 The covariance matrix of a data set is known to be well approximated
 with the classical `Maximum Likelihood Estimator` (or `empirical

@@ -38,15 +40,14 @@ whether the data are centered or not, the result will be different, so
 one may want to use the `assume_centered` parameter accurately.

 .. topic:: Examples:
-
+
   * See :ref:`example_covariance_plot_covariance_estimation.py` for
     an example on how to fit an :class:`EmpiricalCovariance` object
     to data.

 Shrunk Covariance
 =================

-.. curentmodule:: scikits.learn.covariance

 Basic shrinkage
 ---------------

@@ -77,7 +78,7 @@ whether the data are centered or not, the result will be different, so
 one may want to use the `assume_centered` parameter accurately.

 .. topic:: Examples:
-
+
   * See :ref:`example_covariance_plot_covariance_estimation.py` for
     an example on how to fit a :class:`ShrunkCovariance` object
     to data.

@@ -101,7 +102,7 @@ fitting a :class:`LedoitWolf` object to the same sample.
     Volume 88, Issue 2, February 2004, pages 365-411.

 .. topic:: Examples:
-
+
   * See :ref:`example_covariance_plot_covariance_estimation.py` for
     an example on how to fit a :class:`LedoitWolf` object to data and
     for visualizing the performances of the Ledoit-Wolf estimator in

@@ -112,6 +113,9 @@ fitting a :class:`LedoitWolf` object to the same sample.
    :align: center
    :scale: 75%

+
+.. _oracle_approximating_shrinkage:
+
 Oracle Approximating Shrinkage
 ------------------------------

@@ -132,13 +136,13 @@ from the matlab program available from the authors' webpage

 [2] "Shrinkage Algorithms for MMSE Covariance Estimation" Chen et al.,
     IEEE Trans. on Sign. Proc., Volume 58, Issue 10, October 2010.
-
+
 .. topic:: Examples:
-
+
   * See :ref:`example_covariance_plot_covariance_estimation.py` for
     an example on how to fit an :class:`OAS` object
     to data.
-
+
   * See :ref:`example_covariance_plot_lw_vs_oas.py` to visualize the
     Mean Squared Error difference between a :class:`LedoitWolf` and
     an :class:`OAS` estimator of the covariance.
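The basic shrinkage estimator discussed in the covariance changes above is a convex combination of the empirical (maximum likelihood) covariance and a scaled identity target. A minimal numpy-only sketch, assuming a hypothetical helper name `shrunk_covariance` (this is not the library's ShrunkCovariance implementation, though `shrinkage` and `assume_centered` mirror the parameters named in the text):

```python
import numpy as np

def shrunk_covariance(X, shrinkage=0.1, assume_centered=False):
    """Basic shrinkage sketch: blend the empirical covariance with a
    scaled identity target (illustrative helper, not the library code)."""
    X = np.asarray(X, dtype=float)
    if not assume_centered:
        X = X - X.mean(axis=0)
    n_samples, n_features = X.shape
    emp_cov = X.T @ X / n_samples          # Maximum Likelihood Estimator
    mu = np.trace(emp_cov) / n_features    # scale of the identity target
    return (1.0 - shrinkage) * emp_cov + shrinkage * mu * np.eye(n_features)
```

With `shrinkage=0` this reduces to the empirical covariance; with `shrinkage=1` it returns the identity scaled by the average variance.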

doc/modules/datasets.rst
Lines changed: 1 addition & 1 deletion

@@ -39,7 +39,7 @@ Dataset generators

 TODO

-
+.. _labeled_faces_in_the_wild:

 The Labeled Faces in the Wild face recognition dataset
 ======================================================

doc/modules/decomposition.rst
Lines changed: 10 additions & 8 deletions

@@ -102,6 +102,8 @@ is not the exact inverse transform of `transform` even when
   <http://arxiv.org/abs/0909.4061>`_
   Halko, et al., 2009

+.. _kernel_PCA:
+
 Kernel PCA
 ----------

@@ -173,14 +175,14 @@ sparse components found by :class:`NMF` on the digits dataset.


 The :attr:`init` attribute determines the initialization method applied, which
-has a great impact on the performance of the method. :class:`NMF` implements
+has a great impact on the performance of the method. :class:`NMF` implements
 the method Nonnegative Double Singular Value Decomposition. NNDSVD is based on
-two SVD processes, one approximating the data matrix, the other approximating
-positive sections of the resulting partial SVD factors utilizing an algebraic
+two SVD processes, one approximating the data matrix, the other approximating
+positive sections of the resulting partial SVD factors utilizing an algebraic
 property of unit rank matrices. The basic NNDSVD algorithm is better fit for
-sparse factorization. Its variants NNDSVDa (in which all zeros are set equal to
-the mean of all elements of the data), and NNDSVDar (in which the zeros are set
-to random perturbations less than the mean of the data divided by 100) are
+sparse factorization. Its variants NNDSVDa (in which all zeros are set equal to
+the mean of all elements of the data), and NNDSVDar (in which the zeros are set
+to random perturbations less than the mean of the data divided by 100) are
 recommended in the dense case.

 :class:`NMF` can also be initialized with random non-negative matrices, by

@@ -189,7 +191,7 @@ passing an integer seed or a `RandomState` to :attr:`init`.
 In :class:`NMF`, sparseness can be enforced by setting the attribute
 :attr:`sparseness` to `data` or `components`. Sparse components lead to
 localized features, and sparse data leads to a more efficient representation
-of the data.
+of the data.

 .. topic:: Examples:

@@ -212,4 +214,4 @@ of the data.
 * `"SVD based initialization: A head start for nonnegative
   matrix factorization"
   <http://www.cs.rpi.edu/~boutsc/files/nndsvd.pdf>`_
-  C. Boutsidis, E. Gallopoulos, 2008
+  C. Boutsidis, E. Gallopoulos, 2008
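The NNDSVD initialization summarized in the decomposition changes above (an SVD of the data matrix, then keeping the dominant positive or negative section of each factor pair) can be sketched with numpy alone. The helper below is an illustrative reading of the Boutsidis & Gallopoulos construction, not the scikits.learn code; the function name and signature are hypothetical:

```python
import numpy as np

def nndsvd(X, n_components):
    """NNDSVD sketch: build nonnegative factors W, H with X ~ W @ H
    from the leading singular triplets of a nonnegative matrix X."""
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    W = np.zeros((X.shape[0], n_components))
    H = np.zeros((n_components, X.shape[1]))
    # The leading singular vectors of a nonnegative matrix can be chosen
    # nonnegative, so the first factor pair is taken directly.
    W[:, 0] = np.sqrt(S[0]) * np.abs(U[:, 0])
    H[0, :] = np.sqrt(S[0]) * np.abs(Vt[0, :])
    for j in range(1, n_components):
        u, v = U[:, j], Vt[j, :]
        # Split each singular vector into its positive and negative sections.
        up, un = np.clip(u, 0, None), np.clip(-u, 0, None)
        vp, vn = np.clip(v, 0, None), np.clip(-v, 0, None)
        # Keep whichever signed section carries more energy.
        pos_norm = np.linalg.norm(up) * np.linalg.norm(vp)
        neg_norm = np.linalg.norm(un) * np.linalg.norm(vn)
        if pos_norm >= neg_norm:
            sigma, a, b = S[j] * pos_norm, up, vp
        else:
            sigma, a, b = S[j] * neg_norm, un, vn
        W[:, j] = np.sqrt(sigma) * a / (np.linalg.norm(a) or 1.0)
        H[j, :] = np.sqrt(sigma) * b / (np.linalg.norm(b) or 1.0)
    return W, H
```

The NNDSVDa/NNDSVDar variants mentioned above would additionally replace the zeros in W and H with the data mean or with small random perturbations.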

doc/support.rst
Lines changed: 2 additions & 1 deletion

@@ -24,7 +24,7 @@ IRC
 Some devs like to hang out on channel #learn on irc.freenode.net

 If you do not have an irc client or are behind a firewall this web
-client works fine: http://webchat.freenode.net
+client works fine: http://webchat.freenode.net


 .. _documentation_resources:

@@ -36,6 +36,7 @@ This documentation is relative to |release|. Documentation for other
 versions can be found here:

 * `Development version <http://scikit-learn.sf.net/dev/>`_
+* `0.8 <http://scikit-learn.sf.net/0.8/>`_
 * `0.7 <http://scikit-learn.sf.net/0.7/>`_
 * `0.6 <http://scikit-learn.sf.net/0.6/>`_
 * `0.5 <http://scikit-learn.sf.net/0.5/>`_
