
weertman/echaino

EchAIno — Small-Object Annotator & Analyzer

EchAIno is a desktop tool for annotating and analyzing small marine organisms (e.g., urchin larvae) with Segment Anything (SAM) and YOLO. Built for biologists: no coding required, and all processing happens locally, so nothing is uploaded.


1) What it does (very high-level)

EchAIno turns pictures taken on a document scanner into clean measurements.

  • Draw boxes around organisms.
  • The app tiles the image and runs SAM to convert each box into a pixel-accurate mask.
  • A Measurement Engine converts pixels to real units (after one-time scale calibration) and reports area, perimeter, diameter, and centroid.
  • (Optional) YOLO + SAHI can auto-detect small objects inside a Region of Interest (ROI).
  • Export to CSV (measurements) and COCO JSON (polygons), or produce a cross-archive HTML report with plots.

Workflow (conceptual): Your Image → Draw boxes → SAM masks → Measurements → (optional) YOLO detections → Exports/Reports/Training
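To make the pixel-to-real-unit step concrete, here is a minimal sketch in plain NumPy. The function name and return fields are illustrative, not EchAIno's actual API; the key idea is that lengths scale by the calibration factor and areas by its square:

```python
import numpy as np

def measure_mask(mask: np.ndarray, px_per_cm: float) -> dict:
    """Compute basic measurements from a boolean mask.

    mask: 2-D boolean array (True = organism pixels), like a SAM mask.
    px_per_cm: scale factor from the one-time ruler calibration.
    """
    area_px = int(mask.sum())                      # pixel count
    ys, xs = np.nonzero(mask)
    centroid = (float(xs.mean()), float(ys.mean()))
    # Equivalent-circle diameter: diameter of a circle with the same area.
    diameter_px = 2.0 * np.sqrt(area_px / np.pi)
    return {
        "area_cm2": area_px / px_per_cm**2,        # area scales with the square
        "diameter_cm": diameter_px / px_per_cm,    # lengths scale linearly
        "centroid_px": centroid,
    }

# Example: a 100x100-pixel solid square at 50 px/cm is 4 cm^2.
mask = np.zeros((200, 200), dtype=bool)
mask[50:150, 50:150] = True
print(measure_mask(mask, px_per_cm=50.0))
```

This is why calibrating before exporting matters: without px_per_cm, only the pixel quantities are meaningful.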


2) Setup (no CS background needed)

A. Install software

  1. Install Anaconda or Miniconda (any OS).

  2. Open the terminal (Anaconda Prompt on Windows) and run:

    conda create -n echaino python=3.10 -y
    conda activate echaino
    python -m pip install --upgrade pip
    pip install pyqt6 numpy opencv-python pillow pyyaml matplotlib pandas seaborn shapely scipy statsmodels tqdm

  3. Install PyTorch (pick ONE line):

  • CPU only (easiest):
    pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cpu
  • GPU (faster; CUDA example):
    pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu121
  4. Install vision libraries:

    pip install ultralytics sahi segment-anything
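After installing, you can sanity-check that every package is importable before launching the app. This snippet is plain Python (no EchAIno code); note that the pip package segment-anything is imported as segment_anything:

```python
import importlib.util

# Runtime dependencies to verify after installation.
packages = ["torch", "ultralytics", "sahi", "segment_anything"]
for pkg in packages:
    status = "OK" if importlib.util.find_spec(pkg) else "MISSING"
    print(f"{pkg}: {status}")
```

If anything prints MISSING, re-run the corresponding pip command inside the activated echaino environment.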

B. Put model files in place (once)

EchAIno uses two kinds of model weights:
1️⃣ SAM (Segment Anything Model) for creating segmentation masks
2️⃣ (optional) YOLO for automatic object detection


1. Download the SAM model checkpoint

The SAM model was released by Meta AI (FAIR) and is publicly available on GitHub.

Here’s how to get it safely:

Option A — Direct link (recommended for most users):

  1. Open the official SAM model page:
    🔗 https://github.com/facebookresearch/segment-anything

  2. Scroll to the section titled Model Checkpoints.

  3. You’ll see download links such as:

    • vit_h (highest accuracy, largest model)
      👉 sam_vit_h_4b8939.pth
    • vit_l (medium)
    • vit_b (fastest, smaller)
  4. Click the link to download your preferred model — typically the large one (vit_h) if you have enough RAM/GPU memory.

  5. Once downloaded, create a folder called models/ inside your EchAIno project directory (same level as main.py), and place the .pth file there.

  6. Update your config.yaml if needed:

```yaml
sam:
  model_type: "vit_h"
  checkpoint: "./models/sam_vit_h_4b8939.pth"
  device: "cuda"  # or "cpu"
```
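For reference, a fuller config.yaml might look like the sketch below. Only the sam block is documented above; base_tile_size and strict_clip are mentioned in Troubleshooting, and their placement and default values here are assumptions — check the config.yaml shipped with the repository for the authoritative layout.

```yaml
# Sketch only -- keys outside the sam block are assumed, not confirmed.
sam:
  model_type: "vit_h"                          # "vit_l"/"vit_b" trade accuracy for speed
  checkpoint: "./models/sam_vit_h_4b8939.pth"
  device: "cuda"                               # or "cpu"
# Tiling options referenced in Troubleshooting (placement/values assumed):
# base_tile_size: 1024
# strict_clip: true
```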

3) How to use the tool

Start the app

From your project folder:

conda activate echaino
python main.py --config config.yaml
# (optional) add --image path/to/image.jpg to pre-load an image

60-second workflow

  1. Open image (File → Open).
  2. Draw boxes around organisms.
  3. Calibrate scale (View → Calibrate Scale) by drawing a ruler line on something with known length (e.g., 1 cm).
  4. Click Process Annotations. SAM converts each box to a mask and measurements are computed.
  5. Export:
    • CSV (area/perimeter/diameter, pixel + real units if calibrated)
    • COCO JSON (polygon annotations)
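The COCO JSON export stores each mask as a polygon. A minimal sketch of that structure is below — the field names follow the standard COCO annotation format, but the helper itself is illustrative, not EchAIno's exporter:

```python
import json

def to_coco(image_path, width, height, polygons, class_names):
    """Build a minimal COCO-format dict from polygon annotations.

    polygons: list of (class_id, [x0, y0, x1, y1, ...]) flat vertex lists.
    """
    annotations = []
    for ann_id, (class_id, verts) in enumerate(polygons, start=1):
        xs, ys = verts[0::2], verts[1::2]
        w, h = max(xs) - min(xs), max(ys) - min(ys)
        annotations.append({
            "id": ann_id,
            "image_id": 1,
            "category_id": class_id,
            "segmentation": [verts],           # COCO allows multiple polygons
            "bbox": [min(xs), min(ys), w, h],  # x, y, width, height
            "area": w * h,                     # bbox area as a rough stand-in
            "iscrowd": 0,
        })
    return {
        "images": [{"id": 1, "file_name": image_path,
                    "width": width, "height": height}],
        "categories": [{"id": i, "name": n} for i, n in enumerate(class_names)],
        "annotations": annotations,
    }

coco = to_coco("scan_001.jpg", 4000, 3000,
               [(0, [10, 10, 60, 10, 60, 40, 10, 40])], ["urchin_larva"])
print(json.dumps(coco, indent=2)[:200])
```

Files in this shape can be loaded by most COCO-aware tools, which is what makes the export useful for downstream training.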

Optional features

  • YOLO detections for tiny objects

    1. Click Select ROI and drag over the area of interest.
    2. Choose your *.pt model path.
    3. Click Run YOLO Detection.
    4. Review, delete any false positives, and (optionally) click Process Annotations to refine with SAM.
  • Train your own YOLO model
    Use menu YOLO → Train YOLO Model…

    • Select archives to build a dataset from your annotations.
    • Set epochs, batch size, and image size.
    • Training produces a new model file in models/ for future detections.
  • Cross-archive analysis & report
    Use Analysis → Cross-Archive Comparison to select multiple archives and metrics (area, perimeter, diameter).
    The app writes a timestamped HTML report to reports/ (includes violin plots, ridge plots, stats).
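Sliced detection — the idea behind SAHI, and the same tiling EchAIno applies before running SAM — splits a large scan into overlapping tiles so that small objects occupy enough pixels per tile. A minimal sketch of the tiling arithmetic (tile size and overlap values here are illustrative, not EchAIno's defaults):

```python
def tile_grid(width, height, tile=1024, overlap=0.2):
    """Yield (x0, y0, x1, y1) tiles covering an image with fractional overlap."""
    step = int(tile * (1 - overlap))
    for y in range(0, max(height - tile, 0) + step, step):
        for x in range(0, max(width - tile, 0) + step, step):
            # Clamp the last row/column so tiles never exceed the image.
            x0 = min(x, max(width - tile, 0))
            y0 = min(y, max(height - tile, 0))
            yield (x0, y0, min(x0 + tile, width), min(y0 + tile, height))

tiles = list(tile_grid(3000, 2000))
print(len(tiles), tiles[0], tiles[-1])
```

The overlap matters: an object cut by one tile boundary still appears whole in a neighboring tile, and detections from all tiles are merged afterward.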

Helpful UI notes

  • Manage Classes: add/rename classes; colors are assigned to keep them visually distinct.
  • Edit Mode (Shift+E): move/resize boxes, delete with Delete key, change class with number keys (1–9/0).
  • Tile Grid: toggle in the control panel to see processing tiles.
  • Debug Mode: enables a tile debug viewer and extra logs (saved under debug_output/).
  • Autosave: every 60 s to archives/…/project_*.json. Manual Save Project also available.

Troubleshooting

  • “No boxes to process” → Draw boxes first.
  • “No scale available” → Calibrate (Ctrl+R) or type a value (px/cm) in the Scale field.
  • Masks look empty → Set strict_clip to false, draw slightly larger boxes, or try a different tile size.
  • Slow processing → Use GPU PyTorch; set smaller base_tile_size; try sam.model_type: vit_b.
  • PyTorch install issues → Use the CPU command first (works everywhere).

Quick Start

  1. Create env: conda create -n echaino python=3.10 -y && conda activate echaino
  2. Install deps: pip install pyqt6 numpy opencv-python pillow pyyaml matplotlib pandas seaborn shapely scipy statsmodels tqdm ultralytics sahi segment-anything
  3. Install PyTorch (CPU or GPU).
  4. Put SAM/YOLO weights in ./models/.
  5. Save the config.yaml above.
  6. Run: python main.py --config config.yaml
  7. Open image → draw boxes → calibrate → Process Annotations → Export.
  8. Optional: Select ROI → Run YOLO Detection; YOLO → Train; Analysis → Cross-Archive.
