
Commit df33d47

dmeoli authored and antmarakis committed
fixed numpy imports (aimacode#1145)
* changed queue to set in AC3: using a set (as in the pseudocode of the original algorithm) reduces the number of consistency checks caused by redundant arcs in the queue. For example, on the harder1 configuration of the Sudoku CSP, the number of consistency checks dropped from 40464 to 12562! (see the sketch after this list)
* re-added test commented out by mistake
* added the mentioned AC4 algorithm for constraint propagation: the AC3 algorithm has non-optimal worst-case time complexity O(cd^3), while AC4 runs in O(cd^2) worst case
* added doctest in Sudoku for AC4 and the possibility of choosing the constraint propagation algorithm in mac inference
* removed useless doctest for AC4 in Sudoku because AC4's tests are already present in test_csp.py
* added map coloring SAT problems
* fixed typo errors and removed unnecessary brackets
* reformulated the map coloring problem
* Revert "reformulated the map coloring problem". This reverts commit 20ab0e5.
* Revert "fixed typo errors and removed unnecessary brackets". This reverts commit f743146.
* Revert "added map coloring SAT problems". This reverts commit 9e0fa55.
* Revert "removed useless doctest for AC4 in Sudoku because AC4's tests are already present in test_csp.py". This reverts commit b3cd24c.
* Revert "added doctest in Sudoku for AC4 and the possibility of choosing the constraint propagation algorithm in mac inference". This reverts commit 6986247.
* Revert "added the mentioned AC4 algorithm for constraint propagation". This reverts commit 03551fb.
* added map coloring SAT problem
* fixed build error
* Revert "added map coloring SAT problem". This reverts commit 93af259.
* Revert "fixed build error". This reverts commit 6641c2c.
* added map coloring SAT problem
* removed redundant parentheses
* added Viterbi algorithm
* added monkey & bananas planning problem
* simplified condition in search.py
* added tests for monkey & bananas planning problem
* removed monkey & bananas planning problem
* Revert "removed monkey & bananas planning problem". This reverts commit 9d37ae0.
* Revert "added tests for monkey & bananas planning problem". This reverts commit 24041e9.
* Revert "simplified condition in search.py". This reverts commit 6d229ce.
* Revert "added monkey & bananas planning problem". This reverts commit c74933a.
* defined the PlanningProblem as a specialization of a search.Problem & fixed typo errors
* fixed doctest in logic.py
* fixed doctest for cascade_distribution
* added ForwardPlanner and tests
* added __lt__ implementation for Expr
* added more tests
* renamed forward planner
* Revert "renamed forward planner". This reverts commit c4139e5.
* renamed forward planner class & added doc
* added backward planner and tests
* fixed mdp4e.py doctests
* removed ignore_delete_lists_heuristic flag
* fixed heuristic for forward and backward planners
* added SATPlan and tests
* fixed ignore delete lists heuristic in forward and backward planners
* fixed backward planner and added tests
* updated doc
* added n-ary csp definition and examples
* added CSPlan and tests
* fixed CSPlan
* added book's cryptarithmetic puzzle example
* fixed typo errors in test_csp
* fixed aimacode#1111
* added sortedcontainers to yml and doc to CSPlan
* added tests for n-ary csp
* fixed utils.extend
* updated test_probability.py
* converted static methods to functions
* added AC3b and AC4 with heuristic and tests
* added conflict-driven clause learning SAT solver
* added tests for cdcl and heuristics
* fixed probability.py
* fixed import
* fixed kakuro
* added Martelli and Montanari rule-based unification algorithm
* removed duplicate standardize_variables
* renamed variables known as built-in functions
* fixed typos in learning.py
* renamed some files and fixed typos
* fixed typos
* fixed typos
* fixed tests
* removed unify_mm
* removed unnecessary brackets
* fixed tests
* moved utility functions to utils.py
* fixed typos
* moved utils function to utils.py, separated probability learning classes from learning.py, fixed typos and fixed imports in .ipynb files
* added missing learners
* fixed Travis build
* fixed typos
* fixed typos
* fixed typos
* fixed typos
* fixed typos in agents files
* fixed imports in agent files
* fixed deep learning .ipynb imports
* fixed typos
* added SVM
* added .ipynb and fixed typos
* adapted code for .ipynb
* fixed typos
* updated .ipynb
* updated .ipynb
* updated logic.py
* updated .ipynb
* updated .ipynb
* updated planning.py
* updated inf definition
* fixed typos
* fixed typos
* fixed typos
* fixed typos
* Revert "fixed typos". This reverts commit 658309d.
* Revert "fixed typos". This reverts commit 08ad660.
* fixed typos
* fixed typos
* fixed typos
* fixed typos
* fixed typos and utils imports in *4e.py files
* fixed typos
* fixed typos
* fixed typos
* fixed typos
* fixed import
* fixed typos
* fixed typos
* fixed typos
* fixed typos
* fixed typos
* updated SVM
* added svm test
* fixed SVM and tests
* fixed some definitions and typos
* fixed svm and tests
* added SVMs also in learning4e.py
* fixed inf definition
* fixed .travis.yml
* fixed .travis.yml
* fixed import
* fixed inf definition
* replaced cvxopt with qpsolvers
* replaced cvxopt with quadprog
* fixed some definitions
* fixed typos and removed unnecessary tests
* replaced quadprog with qpsolvers
* fixed extend in utils
* specified error type in try-catch block
* fixed extend in utils
* fixed typos
* fixed learning.py
* fixed doctest errors
* added comments
* removed unnecessary if condition
* updated learning.py
* fixed imports
* removed unnecessary imports
* fixed keras imports
* fixed typos
* fixed learning_curve
* added comments
* fixed typos
* removed inf and isclose definition from utils and replaced with numpy.inf and numpy.isclose
* fixed doctests
* fixed numpy imports
* fixed superclass call
* removed utils import from 4e py file
* removed unnecessary norm function in utils and fixed Activation definition
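For context on the first item, here is a minimal sketch of arc consistency with a set-based worklist. This is illustrative only, not the repo's exact code; `csp.variables`, `csp.neighbors`, `csp.domains`, and `csp.constraints` follow aima-python's CSP conventions.

```python
def AC3(csp):
    """Make csp arc-consistent; return False if some domain becomes empty.

    A set-based worklist means an arc (Xi, Xj) that is already pending is
    never scheduled twice, cutting down redundant consistency checks.
    """
    worklist = {(Xi, Xj) for Xi in csp.variables for Xj in csp.neighbors[Xi]}
    while worklist:
        Xi, Xj = worklist.pop()
        if revise(csp, Xi, Xj):
            if not csp.domains[Xi]:
                return False  # inconsistency detected
            # only arcs pointing at Xi need rechecking; set union dedupes
            worklist |= {(Xk, Xi) for Xk in csp.neighbors[Xi] if Xk != Xj}
    return True


def revise(csp, Xi, Xj):
    """Remove values of Xi's domain that have no support in Xj's domain."""
    revised = False
    for x in csp.domains[Xi][:]:  # iterate over a copy while removing
        if not any(csp.constraints(Xi, x, Xj, y) for y in csp.domains[Xj]):
            csp.domains[Xi].remove(x)
            revised = True
    return revised
```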
1 parent 04b3326 commit df33d47

File tree: 6 files changed (+29 additions, -34 deletions)


deep_learning4e.py

Lines changed: 9 additions & 9 deletions
@@ -8,7 +8,7 @@
 from keras.layers import Embedding, SimpleRNN, Dense
 from keras.preprocessing import sequence
 
-from utils4e import (sigmoid, dot_product, softmax1D, conv1D, gaussian_kernel, element_wise_product, vector_add,
+from utils4e import (Sigmoid, dot_product, softmax1D, conv1D, gaussian_kernel, element_wise_product, vector_add,
                      random_weights, scalar_vector_product, matrix_multiplication, map_vector, mse_loss)
 
 
@@ -37,7 +37,7 @@ class NNUnit(Node):
     """
 
     def __init__(self, weights=None, value=None):
-        super(NNUnit, self).__init__(value)
+        super().__init__(value)
         self.weights = weights or []
 
 
@@ -59,7 +59,7 @@ class OutputLayer(Layer):
     """1D softmax output layer in 19.3.2"""
 
     def __init__(self, size=3):
-        super(OutputLayer, self).__init__(size)
+        super().__init__(size)
 
     def forward(self, inputs):
         assert len(self.nodes) == len(inputs)
@@ -73,7 +73,7 @@ class InputLayer(Layer):
     """1D input layer. Layer size is the same as input vector size."""
 
     def __init__(self, size=3):
-        super(InputLayer, self).__init__(size)
+        super().__init__(size)
 
     def forward(self, inputs):
         """Take each value of the inputs to each unit in the layer."""
@@ -92,10 +92,10 @@ class DenseLayer(Layer):
     """
 
     def __init__(self, in_size=3, out_size=3, activation=None):
-        super(DenseLayer, self).__init__(out_size)
+        super().__init__(out_size)
         self.out_size = out_size
         self.inputs = None
-        self.activation = sigmoid() if not activation else activation
+        self.activation = Sigmoid() if not activation else activation
         # initialize weights
         for node in self.nodes:
             node.weights = random_weights(-0.5, 0.5, in_size)
@@ -118,7 +118,7 @@ class ConvLayer1D(Layer):
     """
 
     def __init__(self, size=3, kernel_size=3):
-        super(ConvLayer1D, self).__init__(size)
+        super().__init__(size)
         # init convolution kernel as gaussian kernel
         for node in self.nodes:
             node.weights = gaussian_kernel(kernel_size)
@@ -142,7 +142,7 @@ class MaxPoolingLayer1D(Layer):
     """
 
     def __init__(self, size=3, kernel_size=3):
-        super(MaxPoolingLayer1D, self).__init__(size)
+        super().__init__(size)
         self.kernel_size = kernel_size
         self.inputs = None
 
@@ -326,7 +326,7 @@ class BatchNormalizationLayer(Layer):
     """Batch normalization layer."""
 
     def __init__(self, size, epsilon=0.001):
-        super(BatchNormalizationLayer, self).__init__(size)
+        super().__init__(size)
         self.epsilon = epsilon
         # self.weights = [beta, gamma]
         self.weights = [0, 0]
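The `super(...)` edits above are pure Python 3 modernization: inside a method, the zero-argument form resolves to the same call as the explicit two-argument form. A tiny standalone check (hypothetical class names, for illustration only):

```python
class Layer:
    def __init__(self, size):
        self.size = size


class DenseLike(Layer):
    """Hypothetical subclass, for illustration only."""

    def __init__(self, out_size):
        # Python 3: super() is equivalent to super(DenseLike, self) here
        super().__init__(out_size)


assert DenseLike(3).size == 3
```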

learning.py

Lines changed: 2 additions & 2 deletions
@@ -265,9 +265,9 @@ def cross_validation_wrapper(learner, dataset, k=10, trials=1):
     while True:
         errT, errV = cross_validation(learner, dataset, size, k, trials)
         # check for convergence provided err_val is not empty
-        if errT and not np.isclose(errT[-1], errT, rel_tol=1e-6):
+        if errT and not np.isclose(errT[-1], errT, rtol=1e-6):
             best_size = 0
-            min_val = inf
+            min_val = np.inf
             i = 0
             while i < size:
                 if errs[i] < min_val:
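Note the keyword rename that accompanies the switch to NumPy: `math.isclose` takes `rel_tol`, while `numpy.isclose` takes `rtol` and broadcasts over sequences. Roughly:

```python
import math

import numpy as np

# math.isclose: scalars only, relative tolerance keyword is rel_tol
print(math.isclose(1.0, 1.000001, rel_tol=1e-6))  # True

# np.isclose: broadcasts, keyword is rtol (plus a default atol=1e-8,
# which math.isclose does not apply); comparing the last error against
# the whole list yields an element-wise boolean array
errs = [0.30, 0.21, 0.2000001, 0.2]
print(np.isclose(errs[-1], errs, rtol=1e-6))  # [False False  True  True]
```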

learning4e.py

Lines changed: 5 additions & 6 deletions
@@ -7,7 +7,6 @@
 from qpsolvers import solve_qp
 
 from probabilistic_learning import NaiveBayesLearner
-from utils import sigmoid, sigmoid_derivative
 from utils4e import *
 
 
@@ -265,9 +264,9 @@ def model_selection(learner, dataset, k=10, trials=1):
     while True:
         err = cross_validation(learner, dataset, size, k, trials)
         # check for convergence provided err_val is not empty
-        if err and not isclose(err[-1], err, rel_tol=1e-6):
+        if err and not np.isclose(err[-1], err, rtol=1e-6):
             best_size = 0
-            min_val = inf
+            min_val = np.inf
             i = 0
             while i < size:
                 if errs[i] < min_val:
@@ -569,8 +568,8 @@ def LogisticLinearLeaner(dataset, learning_rate=0.01, epochs=100):
         # pass over all examples
         for example in examples:
             x = [1] + example
-            y = sigmoid(dot_product(w, x))
-            h.append(sigmoid_derivative(y))
+            y = Sigmoid().f(dot_product(w, x))
+            h.append(Sigmoid().derivative(y))
             t = example[idx_t]
             err.append(t - y)
 
@@ -581,7 +580,7 @@ def LogisticLinearLeaner(dataset, learning_rate=0.01, epochs=100):
 
     def predict(example):
         x = [1] + example
-        return sigmoid(dot_product(w, x))
+        return Sigmoid().f(dot_product(w, x))
 
     return predict
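`Sigmoid` bundles the two former free functions into one object: `f` is the logistic function and `derivative` is its derivative expressed in terms of the output value. A sketch of the replaced pair (illustrative, mirroring the old helpers rather than the repo's exact code):

```python
import numpy as np


class Sigmoid:
    def f(self, x):
        # logistic function
        return 1 / (1 + np.exp(-x))

    def derivative(self, value):
        # derivative written in terms of the output y = f(x): y * (1 - y)
        return value * (1 - value)


act = Sigmoid()
y = act.f(0.5)
print(y, act.derivative(y))  # what sigmoid()/sigmoid_derivative() returned
```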

tests/test_deep_learning4e.py

Lines changed: 0 additions & 1 deletion
@@ -1,4 +1,3 @@
-import numpy as np
 import pytest
 from keras.datasets import imdb

utils.py

Lines changed: 0 additions & 5 deletions
@@ -273,11 +273,6 @@ def normalize(dist):
     return [(n / total) for n in dist]
 
 
-def norm(x, ord=2):
-    """Return the n-norm of vector x."""
-    return np.linalg.norm(x, ord)
-
-
 def random_weights(min_value, max_value, num_weights):
     return [random.uniform(min_value, max_value) for _ in range(num_weights)]
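The removed `norm` was a one-line wrapper, so callers can use `np.linalg.norm` directly:

```python
import numpy as np

x = [3.0, 4.0]
print(np.linalg.norm(x))         # 5.0, the 2-norm the wrapper defaulted to
print(np.linalg.norm(x, ord=1))  # 7.0, the 1-norm
```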

utils4e.py

Lines changed: 13 additions & 11 deletions
@@ -92,6 +92,10 @@ def remove_all(item, seq):
     """Return a copy of seq (or string) with all occurrences of item removed."""
     if isinstance(seq, str):
         return seq.replace(item, '')
+    elif isinstance(seq, set):
+        rest = seq.copy()
+        rest.remove(item)
+        return rest
     else:
         return [x for x in seq if x != item]
 
@@ -368,11 +372,6 @@ def normalize(dist):
     return [(n / total) for n in dist]
 
 
-def norm(x, ord=2):
-    """Return the n-norm of vector x."""
-    return np.linalg.norm(x, ord)
-
-
 def random_weights(min_value, max_value, num_weights):
     return [random.uniform(min_value, max_value) for _ in range(num_weights)]
 
@@ -402,7 +401,10 @@ def gaussian_kernel_2D(size=3, sigma=0.5):
 
 class Activation:
 
-    def derivative(self, value):
+    def f(self, x):
+        pass
+
+    def derivative(self, x):
         pass
 
 
@@ -418,7 +420,7 @@ def softmax1D(x):
     return [exp / sum_exps for exp in exps]
 
 
-class sigmoid(Activation):
+class Sigmoid(Activation):
 
     def f(self, x):
         if x >= 100:
@@ -431,7 +433,7 @@ def derivative(self, value):
         return value * (1 - value)
 
 
-class relu(Activation):
+class Relu(Activation):
 
     def f(self, x):
         return max(0, x)
@@ -440,7 +442,7 @@ def derivative(self, value):
         return 1 if value > 0 else 0
 
 
-class elu(Activation):
+class Elu(Activation):
 
     def f(self, x, alpha=0.01):
         return x if x > 0 else alpha * (np.exp(x) - 1)
@@ -449,7 +451,7 @@ def derivative(self, value, alpha=0.01):
         return 1 if value > 0 else alpha * np.exp(value)
 
 
-class tanh(Activation):
+class Tanh(Activation):
 
     def f(self, x):
         return np.tanh(x)
@@ -458,7 +460,7 @@ def derivative(self, value):
         return 1 - (value ** 2)
 
 
-class leaky_relu(Activation):
+class LeakyRelu(Activation):
 
     def f(self, x, alpha=0.01):
         return x if x > 0 else alpha * x
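With `f` declared on the `Activation` base class, every activation now exposes the same two-method interface, so layer code can hold any of them interchangeably. A small illustrative check (assumes the utils4e.py from this commit is importable):

```python
from utils4e import Elu, Relu, Sigmoid, Tanh

# each subclass answers the same f/derivative interface
for act in (Sigmoid(), Relu(), Elu(), Tanh()):
    y = act.f(0.5)
    print(type(act).__name__, y, act.derivative(y))
```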
