test_neural_network_learner() fails on initial test suite run. #744

Closed
@roberthoenig


After following the instructions to set up this repo, I ran py.test for the first time, with the following output:

robert@robhoenig:~/git/aima-python$ py.test
============================= test session starts ==============================
platform linux -- Python 3.5.2, pytest-3.4.0, py-1.5.2, pluggy-0.6.0
rootdir: /home/robert/git/aima-python, inifile: pytest.ini
collected 225 items                                                            

tests/test_agents.py .......                                             [  3%]
tests/test_csp.py ...........................                            [ 15%]
tests/test_games.py ...                                                  [ 16%]
tests/test_knowledge.py ........                                         [ 20%]
tests/test_learning.py ................F..                               [ 28%]
tests/test_logic.py ....................................                 [ 44%]
tests/test_mdp.py ....                                                   [ 46%]
tests/test_nlp.py ....................                                   [ 55%]
tests/test_planning.py .........                                         [ 59%]
tests/test_probability.py ...................                            [ 67%]
tests/test_rl.py ...                                                     [ 68%]
tests/test_search.py .................                                   [ 76%]
tests/test_text.py ...............                                       [ 83%]
tests/test_utils.py ......................................               [100%]

=================================== FAILURES ===================================
_________________________ test_neural_network_learner __________________________

    def test_neural_network_learner():
        iris = DataSet(name="iris")
        classes = ["setosa", "versicolor", "virginica"]
        iris.classes_to_numbers(classes)
        nNL = NeuralNetLearner(iris, [5], 0.15, 75)
        tests = [([5.0, 3.1, 0.9, 0.1], 0),
                 ([5.1, 3.5, 1.0, 0.0], 0),
                 ([4.9, 3.3, 1.1, 0.1], 0),
                 ([6.0, 3.0, 4.0, 1.1], 1),
                 ([6.1, 2.2, 3.5, 1.0], 1),
                 ([5.9, 2.5, 3.3, 1.1], 1),
                 ([7.5, 4.1, 6.2, 2.3], 2),
                 ([7.3, 4.0, 6.1, 2.4], 2),
                 ([7.0, 3.3, 6.1, 2.5], 2)]
        assert grade_learner(nNL, tests) >= 1/3
>       assert err_ratio(nNL, iris) < 0.2
E       assert 0.20666666666666667 < 0.2
E        +  where 0.20666666666666667 = err_ratio(<function NeuralNetLearner.<locals>.predict at 0x7f233dcb29d8>, <DataSet(iris): 150 examples, 5 attributes>)

tests/test_learning.py:195: AssertionError
==================== 1 failed, 224 passed in 24.27 seconds =====================

All subsequent runs of the test suite passed completely, which makes this look like a rare nondeterministic failure. If so, it must be extraordinarily rare, since

for x in {1..100}; do py.test -k test_learning.py; done

passed without a single failure. Print-debugging test_neural_network_learner() showed that err_ratio() normally returns 0.1266666666666667.
Since the failure occurred only on the initial run of the test suite, I tried to reproduce it in a fresh aima-python clone and by removing .pytest_cache, but without success.
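If the flakiness comes from random weight initialization, seeding the RNG before constructing the learner should make a failing run reproducible. A minimal sketch of the idea, assuming (hypothetically) that the learner draws its initial weights from Python's stdlib random module; init_weights below is an illustrative stand-in, not aima-python code:

```python
import random

# Hypothetical stand-in for the weight initialization inside a neural net
# learner: drawing initial weights from a seeded RNG makes every run,
# including a flaky failing one, exactly repeatable.
def init_weights(n_weights, seed):
    rng = random.Random(seed)  # isolated RNG so the seed is explicit
    return [rng.uniform(-0.5, 0.5) for _ in range(n_weights)]

same_a = init_weights(8, seed=42)
same_b = init_weights(8, seed=42)
other = init_weights(8, seed=7)

assert same_a == same_b  # identical seed -> identical initial weights
assert same_a != other   # different seed -> (almost surely) different weights
```

If the test seeded the RNG this way before building NeuralNetLearner, err_ratio() would return the same value on every run, and the 0.2066… outlier would either always or never appear.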

Any guesses what's going on here?

Note: I tried this on Ubuntu 16.04.
Another note: thanks for the amazing book and repo. I started digging into the code base today, and everything I've seen so far is clear, well documented, and great for learning.
