
Commit 5d7fbc7

Remove mlab.cohere_pairs
1 parent e40dc94 commit 5d7fbc7

File tree

2 files changed: +3 −164 lines


doc/api/next_api_changes/2018-09-18-DS.rst

Lines changed: 3 additions & 1 deletion

@@ -4,4 +4,6 @@ Removal of deprecated :mod:`matplotlib.mlab` code
 Lots of code inside the :mod:`matplotlib.mlab` module which was deprecated
 in Matplotlib 2.2 has been removed. See below for a list:
 
-- `mlab.logspace`
+- `mlab.logspace` (use numpy.logspace instead)
+- `mlab.cohere_pairs` (use scipy.signal.coherence instead)
+- `mlab.donothing_callback`
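The deprecation note points to `scipy.signal.coherence` as the replacement. A minimal migration sketch, rebuilding the dict-of-pairs result that `cohere_pairs` used to return (variable names `X`, `ij`, `Cxy` mirror the old mlab API and are illustrative only; `fs=2` and `nperseg=256` match the old `Fs`/`NFFT` defaults):

```python
# Hypothetical migration from the removed mlab.cohere_pairs to
# scipy.signal.coherence: compute each pair's coherence vector in a
# loop and collect them in a dict keyed by (i, j), as mlab did.
import numpy as np
from scipy.signal import coherence

rng = np.random.default_rng(0)
X = rng.standard_normal((4096, 4))   # numSamples x numCols, as in mlab
X[:, 1] += 0.5 * X[:, 0]             # make columns 0 and 1 correlated

# all nonredundant column pairs, as in the old docstring example
ij = [(i, j) for i in range(X.shape[1]) for j in range(i + 1, X.shape[1])]

Cxy = {}
for i, j in ij:
    f, C = coherence(X[:, i], X[:, j], fs=2, nperseg=256)
    Cxy[i, j] = C                    # coherence vector for this pair
```

Note that `scipy.signal.coherence` returns the frequency vector first and handles windowing, detrending, and overlap via keyword arguments; unlike `cohere_pairs` it does not cache FFTs across pairs, so for very many pairs this loop recomputes each column's spectra.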

lib/matplotlib/mlab.py

Lines changed: 0 additions & 163 deletions

@@ -1194,169 +1194,6 @@ def cohere(x, y, NFFT=256, Fs=2, detrend=detrend_none, window=window_hanning,
     return Cxy, f
 
 
-@cbook.deprecated('2.2')
-def donothing_callback(*args):
-    pass
-
-
-@cbook.deprecated('2.2', 'scipy.signal.coherence')
-def cohere_pairs(X, ij, NFFT=256, Fs=2, detrend=detrend_none,
-                 window=window_hanning, noverlap=0,
-                 preferSpeedOverMemory=True,
-                 progressCallback=donothing_callback,
-                 returnPxx=False):
-    """
-    Compute the coherence and phase for all pairs *ij*, in *X*.
-
-    *X* is a *numSamples* * *numCols* array
-
-    *ij* is a list of tuples.  Each tuple is a pair of indexes into
-    the columns of X for which you want to compute coherence.  For
-    example, if *X* has 64 columns, and you want to compute all
-    nonredundant pairs, define *ij* as::
-
-      ij = []
-      for i in range(64):
-          for j in range(i+1,64):
-              ij.append( (i,j) )
-
-    *preferSpeedOverMemory* is an optional bool. Defaults to true. If
-    False, limits the caching by only making one, rather than two,
-    complex cache arrays. This is useful if memory becomes critical.
-    Even when *preferSpeedOverMemory* is False, :func:`cohere_pairs`
-    will still give significant performance gains over calling
-    :func:`cohere` for each pair, and will use subtantially less
-    memory than if *preferSpeedOverMemory* is True.  In my tests with
-    a 43000,64 array over all nonredundant pairs,
-    *preferSpeedOverMemory* = True delivered a 33% performance boost
-    on a 1.7GHZ Athlon with 512MB RAM compared with
-    *preferSpeedOverMemory* = False.  But both solutions were more
-    than 10x faster than naively crunching all possible pairs through
-    :func:`cohere`.
-
-    Returns
-    -------
-    Cxy : dictionary of (*i*, *j*) tuples -> coherence vector for
-        that pair.  i.e., ``Cxy[(i,j) = cohere(X[:,i], X[:,j])``.
-        Number of dictionary keys is ``len(ij)``.
-
-    Phase : dictionary of phases of the cross spectral density at
-        each frequency for each pair.  Keys are (*i*, *j*).
-
-    freqs : vector of frequencies, equal in length to either the
-        coherence or phase vectors for any (*i*, *j*) key.
-
-    e.g., to make a coherence Bode plot::
-
-          subplot(211)
-          plot( freqs, Cxy[(12,19)])
-          subplot(212)
-          plot( freqs, Phase[(12,19)])
-
-    For a large number of pairs, :func:`cohere_pairs` can be much more
-    efficient than just calling :func:`cohere` for each pair, because
-    it caches most of the intensive computations.  If :math:`N` is the
-    number of pairs, this function is :math:`O(N)` for most of the
-    heavy lifting, whereas calling cohere for each pair is
-    :math:`O(N^2)`.  However, because of the caching, it is also more
-    memory intensive, making 2 additional complex arrays with
-    approximately the same number of elements as *X*.
-
-    See :file:`test/cohere_pairs_test.py` in the src tree for an
-    example script that shows that this :func:`cohere_pairs` and
-    :func:`cohere` give the same results for a given pair.
-
-    See Also
-    --------
-    :func:`psd`
-        For information about the methods used to compute :math:`P_{xy}`,
-        :math:`P_{xx}` and :math:`P_{yy}`.
-    """
-    numRows, numCols = X.shape
-
-    # zero pad if X is too short
-    if numRows < NFFT:
-        tmp = X
-        X = np.zeros((NFFT, numCols), X.dtype)
-        X[:numRows, :] = tmp
-        del tmp
-
-    numRows, numCols = X.shape
-    # get all the columns of X that we are interested in by checking
-    # the ij tuples
-    allColumns = set()
-    for i, j in ij:
-        allColumns.add(i)
-        allColumns.add(j)
-    Ncols = len(allColumns)
-
-    # for real X, ignore the negative frequencies
-    if np.iscomplexobj(X):
-        numFreqs = NFFT
-    else:
-        numFreqs = NFFT//2+1
-
-    # cache the FFT of every windowed, detrended NFFT length segment
-    # of every channel.  If preferSpeedOverMemory, cache the conjugate
-    # as well
-    if np.iterable(window):
-        if len(window) != NFFT:
-            raise ValueError("The length of the window must be equal to NFFT")
-        windowVals = window
-    else:
-        windowVals = window(np.ones(NFFT, X.dtype))
-    ind = list(range(0, numRows-NFFT+1, NFFT-noverlap))
-    numSlices = len(ind)
-    FFTSlices = {}
-    FFTConjSlices = {}
-    Pxx = {}
-    slices = range(numSlices)
-    normVal = np.linalg.norm(windowVals)**2
-    for iCol in allColumns:
-        progressCallback(i/Ncols, 'Cacheing FFTs')
-        Slices = np.zeros((numSlices, numFreqs), dtype=np.complex_)
-        for iSlice in slices:
-            thisSlice = X[ind[iSlice]:ind[iSlice]+NFFT, iCol]
-            thisSlice = windowVals*detrend(thisSlice)
-            Slices[iSlice, :] = np.fft.fft(thisSlice)[:numFreqs]
-
-        FFTSlices[iCol] = Slices
-        if preferSpeedOverMemory:
-            FFTConjSlices[iCol] = np.conj(Slices)
-        Pxx[iCol] = np.divide(np.mean(abs(Slices)**2, axis=0), normVal)
-    del Slices, ind, windowVals
-
-    # compute the coherences and phases for all pairs using the
-    # cached FFTs
-    Cxy = {}
-    Phase = {}
-    count = 0
-    N = len(ij)
-    for i, j in ij:
-        count += 1
-        if count % 10 == 0:
-            progressCallback(count/N, 'Computing coherences')
-
-        if preferSpeedOverMemory:
-            Pxy = FFTSlices[i] * FFTConjSlices[j]
-        else:
-            Pxy = FFTSlices[i] * np.conj(FFTSlices[j])
-        if numSlices > 1:
-            Pxy = np.mean(Pxy, axis=0)
-        # Pxy = np.divide(Pxy, normVal)
-        Pxy /= normVal
-        # Cxy[(i,j)] = np.divide(np.absolute(Pxy)**2, Pxx[i]*Pxx[j])
-        Cxy[i, j] = abs(Pxy)**2 / (Pxx[i]*Pxx[j])
-        Phase[i, j] = np.arctan2(Pxy.imag, Pxy.real)
-
-    freqs = Fs/NFFT*np.arange(numFreqs)
-    if returnPxx:
-        return Cxy, Phase, freqs, Pxx
-    else:
-        return Cxy, Phase, freqs
-
-
 @cbook.deprecated('2.2', 'scipy.stats.entropy')
 def entropy(y, bins):
     r"""
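The removed function's main optimization, as its docstring explains, was computing each column's windowed segment FFTs once and reusing them for every pair, making the per-pair work cheap. The idea can be sketched in plain NumPy. This is a hypothetical re-implementation for illustration only (Hann window, non-overlapping segments, no detrending, roughly matching the old `window_hanning`/`detrend_none` defaults), not Matplotlib code:

```python
import numpy as np

def pairwise_coherence(X, ij, nfft=256, fs=2):
    """Welch-style coherence for the column pairs in ij, caching
    per-column segment FFTs as the removed cohere_pairs did."""
    n, _ = X.shape
    win = np.hanning(nfft)
    norm = np.linalg.norm(win) ** 2
    starts = range(0, n - nfft + 1, nfft)       # non-overlapping segments
    cols = {c for pair in ij for c in pair}
    # cache one FFT per segment per needed column: the expensive part
    # scales with the number of columns, not the number of pairs
    fft = {c: np.array([np.fft.rfft(win * X[s:s + nfft, c]) for s in starts])
           for c in cols}
    Pxx = {c: np.mean(np.abs(F) ** 2, axis=0) / norm for c, F in fft.items()}
    Cxy = {}
    for i, j in ij:                             # cheap per-pair reuse
        Pxy = np.mean(fft[i] * np.conj(fft[j]), axis=0) / norm
        Cxy[i, j] = np.abs(Pxy) ** 2 / (Pxx[i] * Pxx[j])
    freqs = fs / nfft * np.arange(nfft // 2 + 1)
    return Cxy, freqs
```

Trading the cached conjugate array for recomputation per pair is exactly the memory/speed knob the old `preferSpeedOverMemory` flag exposed; this sketch always recomputes the conjugate.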
