
CREMI Python Scripts

Python scripts associated with the CREMI Challenge.

Installation

If you are using pip, installing the scripts is as easy as

pip install git+https://github.com/cremi/cremi_python.git

Alternatively, you can clone this repository yourself and use distutils

python setup.py install

or simply add the cremi_python directory to your PYTHONPATH.
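
For the PYTHONPATH route, a one-line sketch (the clone location ~/cremi_python is a placeholder for wherever you checked out the repository):

export PYTHONPATH=$HOME/cremi_python:$PYTHONPATH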

Evaluation

For each of the challenge categories, you will find an evaluation class in cremi.evaluation: NeuronIds, Clefts, and SynapticPartners.

After you have read a test file into test and a ground truth file into truth, you can evaluate your results by instantiating these classes as follows:

from cremi.evaluation import NeuronIds, Clefts, SynapticPartners

# neuron segmentation: variation of information (split/merge) and adapted Rand error
neuron_ids_evaluation = NeuronIds(truth.read_neuron_ids())
(voi_split, voi_merge) = neuron_ids_evaluation.voi(test.read_neuron_ids())
adapted_rand = neuron_ids_evaluation.adapted_rand(test.read_neuron_ids())

# synaptic clefts: false positive/negative counts and accumulated distance statistics
clefts_evaluation = Clefts(test.read_clefts(), truth.read_clefts())
fp_count = clefts_evaluation.count_false_positives()
fn_count = clefts_evaluation.count_false_negatives()
fp_stats = clefts_evaluation.acc_false_positives()
fn_stats = clefts_evaluation.acc_false_negatives()

# synaptic partners: f-score of matched pre-/post-synaptic partner annotations
synaptic_partners_evaluation = SynapticPartners()
fscore = synaptic_partners_evaluation.fscore(
    test.read_annotations(),
    truth.read_annotations(),
    truth.read_neuron_ids())
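
The snippet above assumes that test and truth are already open CREMI files. As a minimal sketch, assuming the CremiFile class provided in cremi.io and placeholder file names, they might be opened like this:

from cremi.io import CremiFile

# open the submission and the ground truth HDF5 files (file names are placeholders)
test = CremiFile("sample_A_test.hdf", "r")
truth = CremiFile("sample_A_truth.hdf", "r")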

See the included example_evaluate.py for a complete example. The metrics are described in more detail on the CREMI Challenge website.

Acknowledgements

Evaluation code contributed by:
