
LinearLearner #409

Closed
@antmarakis

Description


I'm having some issues with the LinearLearner function in learning.py and I would like some help.

I noticed that the function does not have corresponding pseudocode, even though it is a standalone algorithm in the script. At first it had some bugs and wouldn't execute, but after I fixed them (#408) it returned results that confused me.

It does not use an activation function, so it quite probably performs some sort of regression. But the results it returns are way off. This is what I did:

I tested the function on the Iris dataset. First, I converted the class names to numbers, so the targets are 0, 1, 2. Then I ran LinearLearner on the modified dataset. Every time it returns very large or very small numbers (e.g. 1.00974582882851e+128). Even though I am technically asking for classification, the output shouldn't be that far off, since the classes are converted to numbers in [0, 2].
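For reference, the behaviour I would expect from a linear learner with no activation is plain linear regression, roughly like the minimal sketch below (my own code, not the implementation in learning.py; the function name, learning rate and epoch count are arbitrary). With unscaled features and too large a step size, exactly this kind of gradient-descent loop overshoots and the weights blow up:

import random

def linear_regression_fit(examples, learning_rate=0.01, epochs=1000):
    # Fit w so that w[0] + w[1:] . x approximates the target (last column of each row).
    # Minimal sketch of batch gradient descent on squared error, not the learning.py code.
    n = len(examples[0]) - 1
    w = [random.uniform(-0.5, 0.5) for _ in range(n + 1)]   # w[0] is the bias
    for _ in range(epochs):
        grad = [0.0] * (n + 1)
        for row in examples:
            x, target = row[:-1], row[-1]
            prediction = w[0] + sum(wi * xi for wi, xi in zip(w[1:], x))
            error = prediction - target
            grad[0] += error
            for i, xi in enumerate(x):
                grad[i + 1] += error * xi
        # If learning_rate is too large for the unscaled features, these updates
        # overshoot and the weights diverge towards values like 1e+128.
        w = [wi - learning_rate * gi / len(examples) for wi, gi in zip(w, grad)]
    return lambda x: w[0] + sum(wi * xi for wi, xi in zip(w[1:], x))

I kept this in plain Python lists only to mirror the style of learning.py; it is just a sketch of what I think the function is supposed to compute.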

I get roughly the same results when I remove one class to make it a binary problem. I even tested this with an activation function, but the numbers were so large that an OverflowError was raised.
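For concreteness, that overflow is easy to reproduce once the raw activation is that large, e.g. with a standard logistic sigmoid (a minimal reproduction, not necessarily the exact activation I plugged in):

import math

def sigmoid(x):
    return 1 / (1 + math.exp(-x))

# math.exp overflows for arguments beyond roughly 709, so an activation of the
# magnitude LinearLearner returns raises OverflowError: math range error.
sigmoid(-1.00974582882851e+128)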

Is there something I misunderstood about the function? If it is indeed wrong, since we don't currently have pseudocode for it, how should we proceed? Delete it? Try to make it work? Add a warning?


PS: The code I used to test the algorithm is the following:

from learning import DataSet, LinearLearner

def test_linear_learner():
    iris = DataSet(name="iris")

    # Map the three class names to the numbers 0, 1, 2.
    classes = ["setosa", "versicolor", "virginica"]
    iris.classes_to_numbers()

    learner = LinearLearner(iris)
    # A prediction for a setosa-like sample should be one of the class numbers.
    assert learner([5, 3, 1, 0.1]) in range(len(classes))
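
For comparison, a closed-form least-squares fit on the same numeric data gives finite, reasonably sized predictions. This is a rough baseline I sketched with numpy; it assumes iris.examples stores each example as a list of attribute values with the numeric class as the last element:

import numpy as np
from learning import DataSet

iris = DataSet(name="iris")
iris.classes_to_numbers()

# Assumes the numeric class sits in the last column of each example.
X = np.array([row[:-1] for row in iris.examples], dtype=float)
y = np.array([row[-1] for row in iris.examples], dtype=float)

# Ordinary least squares with an explicit bias column, as a baseline for LinearLearner.
Xb = np.hstack([np.ones((len(X), 1)), X])
w, *_ = np.linalg.lstsq(Xb, y, rcond=None)

print(w @ np.array([1, 5, 3, 1, 0.1]))   # prediction for the query point in the test above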
