
iOS app for detecting ECG abnormalities using deep learning methods and viewing the live ECG signal

FBI223/EKGApp


📱 ECGApp – Mobile ECG Viewer & Classifier

ECGApp is a mobile application (iOS) for real-time ECG signal acquisition, processing, and classification using deep learning models.



🧠 Key Features

  • 🔌 Connect to external BLE ECG sensors (e.g., ESP32, Arduino)

  • 📈 Real-time signal visualization (zoom, pan, rescale)

  • ⏺ Record ECG to local .json, .dat, and .hea formats

  • 🧠 AI-based classification:

    • Heart rhythm classification (e.g., NSR, AF, PVC)
    • Beat-type classification near QRS centers (e.g., N, V, S)
    • Per-sample waveform segmentation into P, QRS, and T waves for morphological analysis
  • 📂 Browse, view, share, or delete saved recordings



⚙️ App Settings

ECGApp includes a configurable Settings panel for adjusting signal processing and model parameters:

  • 📶 Sample Rate In: Set to match your BLE ECG device (e.g., 128 Hz)
  • 🧍 Demographics: Select user age and sex (used as AI model inputs)
  • 📊 Y-Axis Range: Adjust ECG chart amplitude (±1 to ±10 mV)
  • 🌙 Dark Mode: Toggle light/dark UI
  • 🐞 Debug Info: Enable detailed internal logging

All settings are applied in real time. Correct configuration ensures accurate AI classification and visualization.


📦 Deep Learning Models Used

1. Rhythm Classification – SE_MobileNet1D_noLSTM (PyTorch)

Lightweight SE-enhanced 1D MobileNet CNN with demographic inputs.

  • Input: 10-second single-lead ECG (5000 samples of the 500 Hz signal), age (normalized), sex (0/1)
  • Output: Single-label rhythm class (softmax)
  • Deployment: Converted via ONNX → TensorFlow → CoreML / TFLite
  • Datasets: CPSC 2018, CPSC Extra, PTB-XL, Georgia Dataset

Architecture Summary:

→ Conv1D(1 → 16, kernel=7, stride=2, padding=3)
  → BatchNorm1d(16)
  → SiLU

→ DepthwiseConv1D(16 → 16, kernel=5, stride=2, padding=2, groups=16)
→ PointwiseConv1D(16 → 32, kernel=1)
  → BatchNorm1d(32)
  → SiLU
→ SEBlock(32):
     → AdaptiveAvgPool1d(1)
     → Linear(32 → 4) → ReLU
     → Linear(4 → 32) → Sigmoid
     → Multiply input × scale
→ Dropout(0.1)

→ DepthwiseConv1D(32 → 32, kernel=5, stride=2, padding=2, groups=32)
→ PointwiseConv1D(32 → 64, kernel=1)
  → BatchNorm1d(64)
  → SiLU
→ SEBlock(64):
     → AdaptiveAvgPool1d(1)
     → Linear(64 → 8) → ReLU
     → Linear(8 → 64) → Sigmoid
     → Multiply input × scale
→ Dropout(0.1)

→ DepthwiseConv1D(64 → 64, kernel=5, stride=2, padding=2, groups=64)
→ PointwiseConv1D(64 → 128, kernel=1)
  → BatchNorm1d(128)
  → SiLU
→ SEBlock(128):
     → AdaptiveAvgPool1d(1)
     → Linear(128 → 16) → ReLU
     → Linear(16 → 128) → Sigmoid
     → Multiply input × scale
→ Dropout(0.1)

→ DepthwiseConv1D(128 → 128, kernel=5, stride=1, padding=2, groups=128)
→ PointwiseConv1D(128 → 128, kernel=1)
  → BatchNorm1d(128)
  → SiLU
→ SEBlock(128):
     → AdaptiveAvgPool1d(1)
     → Linear(128 → 16) → ReLU
     → Linear(16 → 128) → Sigmoid
     → Multiply input × scale
→ Dropout(0.1)
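The depthwise-separable stages above all follow the same pattern. A minimal PyTorch sketch of one such stage is shown below; the class names (`SEBlock1d`, `DWSeparableSE`) are illustrative, not taken from the app's source:

```python
import torch
import torch.nn as nn

class SEBlock1d(nn.Module):
    """Squeeze-and-excitation: pool over time, bottleneck MLP, channel rescale."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool1d(1)
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x):                          # x: (B, C, T)
        scale = self.fc(self.pool(x).squeeze(-1))  # (B, C)
        return x * scale.unsqueeze(-1)             # multiply input × scale

class DWSeparableSE(nn.Module):
    """Depthwise conv → pointwise conv → BN → SiLU → SE → Dropout."""
    def __init__(self, c_in, c_out, stride=2):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv1d(c_in, c_in, 5, stride=stride, padding=2, groups=c_in),  # depthwise
            nn.Conv1d(c_in, c_out, 1),                                        # pointwise
            nn.BatchNorm1d(c_out), nn.SiLU(),
            SEBlock1d(c_out), nn.Dropout(0.1),
        )

    def forward(self, x):
        return self.block(x)

x = torch.randn(2, 16, 1250)
y = DWSeparableSE(16, 32)(x)   # stride 2 halves T: (2, 32, 625)
```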


2. Beat-Type Classification – CNN Model (PyTorch)

Trained on beats centered on QRS complexes from:

  • MIT-BIH Arrhythmia Database (mitdb)

  • INCART 12-lead Arrhythmia Database (incartdb)

  • SVDB

  • Input: 1D ECG window centered at the QRS (540 samples of the 360 Hz signal)

  • Output: Beat class (N, V, S, F, Q)

  • Deployment: Used for inference after R-peak detection

Architecture Summary:

ECGClassifier:
Input: (B, 1, T) (e.g., 540 samples @ 360 Hz)



FEATURE EXTRACTION
→ Conv1D(1 → 32, kernel=7, stride=2, padding=3)
→ BatchNorm1d(32) → ReLU

→ ResidualBlock(32):
  → Conv1D(32 → 32, kernel=3, padding=1)
  → BatchNorm1d(32) → ReLU
  → Conv1D(32 → 32, kernel=3, padding=1)
  → BatchNorm1d(32)
  → Skip connection + ReLU

→ SEBlock(32):
  → AdaptiveAvgPool1d(1)
  → Conv1D(32 → 4, kernel=1) → ReLU
  → Conv1D(4 → 32, kernel=1) → Sigmoid
  → Multiply (channel-wise scaling)

→ Conv1D(32 → 64, kernel=5, stride=2, padding=2)
→ BatchNorm1d(64) → ReLU

→ ResidualBlock(64):
  → Conv1D(64 → 64, kernel=3, padding=1)
  → BatchNorm1d(64) → ReLU
  → Conv1D(64 → 64, kernel=3, padding=1)
  → BatchNorm1d(64)
  → Skip connection + ReLU

→ SEBlock(64):
  → AdaptiveAvgPool1d(1)
  → Conv1D(64 → 8, kernel=1) → ReLU
  → Conv1D(8 → 64, kernel=1) → Sigmoid
  → Multiply (channel-wise scaling)


CLASSIFICATION HEAD
→ AdaptiveAvgPool1d(1)
→ Flatten: (B, 64)

→ Linear(64 → 32)
→ ReLU
→ Dropout(0.5)

→ Linear(32 → num_classes)

Output:
→ Shape: (B, num_classes) – class logits
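The residual block used in the feature extractor can be sketched in PyTorch as follows; the class name `ResidualBlock1d` is illustrative:

```python
import torch
import torch.nn as nn

class ResidualBlock1d(nn.Module):
    """Two conv-BN layers with an identity skip connection, ReLU after the add."""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv1d(channels, channels, 3, padding=1)
        self.bn1 = nn.BatchNorm1d(channels)
        self.conv2 = nn.Conv1d(channels, channels, 3, padding=1)
        self.bn2 = nn.BatchNorm1d(channels)

    def forward(self, x):
        out = torch.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return torch.relu(out + x)  # skip connection + ReLU

x = torch.randn(4, 32, 135)
y = ResidualBlock1d(32)(x)      # shape is preserved: (4, 32, 135)
```

Because the convolutions use padding=1 with kernel 3, the block preserves both channel count and temporal length, which is what allows the identity skip connection.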



3. Waveform Segmentation – UNet1D (PyTorch)

Lightweight 1D U-Net model for per-sample waveform classification, trained on the LUDB dataset.

  • Input: 10-second single-lead ECG segment (2000 samples of the 500 Hz signal, lead II)
  • Output: Per-sample waveform class (none, P, QRS, T) via softmax over 4 classes
  • Deployment: Converted to CoreML (UnetModel.mlpackage) for real-time waveform segmentation
  • Datasets: LUDB (Lobachevsky University Database), 200 manually annotated 12-lead ECGs

Architecture Summary:

UNet1D:
Input: (B, 1, 2000)


ENCODER / DOWNSAMPLING

→ Block 1:
  → Conv1D(1 → 4, kernel=9, padding=4)
  → BatchNorm1d(4) → ReLU
  → Conv1D(4 → 4, kernel=9, padding=4)
  → BatchNorm1d(4) → ReLU
  → MaxPool1D(kernel=2)             # ↓ T/2

→ Block 2:
  → Conv1D(4 → 8, kernel=9, padding=4)
  → BatchNorm1d(8) → ReLU
  → Conv1D(8 → 8, kernel=9, padding=4)
  → BatchNorm1d(8) → ReLU
  → MaxPool1D(kernel=2)             # ↓ T/4

→ Block 3:
  → Conv1D(8 → 16, kernel=9, padding=4)
  → BatchNorm1d(16) → ReLU
  → Conv1D(16 → 16, kernel=9, padding=4)
  → BatchNorm1d(16) → ReLU
  → MaxPool1D(kernel=2)             # ↓ T/8

→ Block 4:
  → Conv1D(16 → 32, kernel=9, padding=4)
  → BatchNorm1d(32) → ReLU
  → Conv1D(32 → 32, kernel=9, padding=4)
  → BatchNorm1d(32) → ReLU
  → MaxPool1D(kernel=2)             # ↓ T/16


BOTTLENECK

→ Conv1D(32 → 64, kernel=9, padding=4)
→ BatchNorm1d(64) → ReLU
→ Conv1D(64 → 64, kernel=9, padding=4)
→ BatchNorm1d(64) → ReLU


DECODER / UPSAMPLING

→ TransposedConv1D(64 → 32, kernel=8, stride=2, padding=3)
→ Pad to match enc4 → Concat([up, enc4]) → (64 channels)
→ Conv1D(64 → 32) → BN → ReLU → Conv1D → BN → ReLU

→ TransposedConv1D(32 → 16, kernel=8, stride=2, padding=3)
→ Pad to match enc3 → Concat([up, enc3]) → (32 channels)
→ Conv1D(32 → 16) → BN → ReLU → Conv1D → BN → ReLU

→ TransposedConv1D(16 → 8, kernel=8, stride=2, padding=3)
→ Pad to match enc2 → Concat([up, enc2]) → (16 channels)
→ Conv1D(16 → 8) → BN → ReLU → Conv1D → BN → ReLU

→ TransposedConv1D(8 → 4, kernel=8, stride=2, padding=3)
→ Pad to match enc1 → Concat([up, enc1]) → (8 channels)
→ Conv1D(8 → 4) → BN → ReLU → Conv1D → BN → ReLU


OUTPUT

→ Final Conv1D(4 → num_classes, kernel=1)  # Pointwise convolution
→ Output shape: (B, num_classes, T)
→ Permute to (B, T, num_classes) for per-sample classification
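The pad-and-concat skip connection used in each decoder step can be sketched in PyTorch as follows. Shapes assume the T = 2000 input above (bottleneck at T/16, encoder feature at T/8); variable names are illustrative:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# One decoder step: upsample ×2, pad to the encoder feature length, concat channels.
up = nn.ConvTranspose1d(64, 32, kernel_size=8, stride=2, padding=3)

bottleneck = torch.randn(1, 64, 125)   # (B, 64, T/16) for T = 2000
enc4 = torch.randn(1, 32, 250)         # encoder skip feature at T/8

x = up(bottleneck)                     # out length: (125-1)*2 - 2*3 + 8 = 250
if x.size(-1) != enc4.size(-1):        # pad at the end if lengths are off by a sample
    x = F.pad(x, (0, enc4.size(-1) - x.size(-1)))
x = torch.cat([x, enc4], dim=1)        # (1, 64, 250): 32 up + 32 skip channels
```

The padding guard matters for odd-length intermediates, where the transposed convolution can come up one sample short of the encoder feature it must be concatenated with.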






📊 Datasets Used

| Dataset         | Role                  | Format | Notes                              |
|-----------------|-----------------------|--------|------------------------------------|
| CPSC 2018       | Rhythm classification | mat    | 1-lead, SNOMED codes               |
| CPSC 2018 Extra | Rhythm classification | mat    | Additional records                 |
| PTB-XL          | Rhythm classification | mat    | With age and sex demographic data  |
| Georgia         | Rhythm classification | mat    | 12-lead                            |
| MIT-BIH (mitdb) | Beat classification   | dat    | 360 Hz, QRS annotated              |
| INCARTDB        | Beat classification   | dat    | 12-lead, annotated                 |
| SVDB            | Beat classification   | dat    | Supraventricular focus             |
| LUDB            | Wave classification   | dat    | Waveform focus                     |

All signals were:

  • Resampled to 500 Hz (rhythm and wave) or 360 Hz (beat)
  • Normalized and denoised using wavelet transform (bior2.6)
  • Lead II extracted and used as input
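The resampling step can be sketched with SciPy's polyphase resampler; the wavelet denoising (bior2.6) would typically follow via PyWavelets and is omitted here to keep the example dependency-light. The function name is illustrative:

```python
import numpy as np
from scipy.signal import resample_poly

def resample_ecg(sig, fs_in, fs_out):
    """Polyphase resampling from fs_in to fs_out using reduced up/down factors."""
    g = np.gcd(fs_in, fs_out)
    return resample_poly(sig, fs_out // g, fs_in // g)

sig_360 = np.zeros(3600)                   # 10 s recorded at 360 Hz
sig_500 = resample_ecg(sig_360, 360, 500)  # 10 s at 500 Hz → 5000 samples
```

Polyphase resampling applies an anti-aliasing filter internally, which is why it is preferred over naive interpolation for ECG signals.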

⚠️ Medical Disclaimer

This application is not a certified medical device.

  • It has not been evaluated by any regulatory authority (FDA, CE, EMA).

  • It is not intended for:

    • Diagnosing or monitoring medical conditions
    • Emergency or therapeutic use
  • AI predictions are experimental and may be inaccurate or misleading.

  • Signal quality may be affected by noise, motion, or hardware limitations.

  • If you experience symptoms (e.g., chest pain, arrhythmia, dizziness), contact a qualified physician immediately.

This app is intended only for research, prototyping, and educational purposes.

Use of this application is entirely at your own risk.


🧪 Use Cases

  • ECG signal acquisition for biomedical prototyping
  • BLE hardware integration (Arduino/ESP32)
  • Testing real-time AI classification on mobile
  • Study of rhythm and beat-type classification models

🔧 Tech Stack

  • SwiftUI + CoreBluetooth (iOS frontend)
  • PyTorch, Keras/TensorFlow (model training)
  • ONNX → TensorFlow → CoreML / TFLite (deployment)
  • WFDB, SciPy, PyWavelets (signal preprocessing)

πŸ” Evaluation & Metrics

Each model is evaluated with task-specific metrics:

Rhythm Classification

  • βœ… Accuracy, macro/weighted F1-score
  • βœ… Per-class precision and recall
  • βœ… Confusion matrix (10-class)
  • βœ… NSR fallback threshold:
    • softmax max < 0.4 β†’ fallback to NSR (59118001)
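The fallback rule can be sketched as follows; only the 0.4 threshold and the NSR code 59118001 come from the text above, while the class-code list is hypothetical:

```python
import numpy as np

NSR_CODE = "59118001"  # SNOMED code for normal sinus rhythm

def rhythm_with_fallback(probs, classes, threshold=0.4):
    """Return the argmax class, or NSR when the model is not confident enough."""
    probs = np.asarray(probs)
    if probs.max() < threshold:
        return NSR_CODE
    return classes[int(probs.argmax())]

classes = ["164889003", "59118001", "427172004"]   # hypothetical code list
rhythm_with_fallback([0.35, 0.33, 0.32], classes)  # → "59118001" (fallback)
rhythm_with_fallback([0.70, 0.20, 0.10], classes)  # → "164889003"
```

The rationale is that a diffuse softmax (no class above 0.4) signals an uncertain prediction, which defaults to the most common class rather than reporting a low-confidence abnormality.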

Beat-Type Classification

  • ✅ Per-beat accuracy (N, V, S, F, Q)
  • ✅ Inference on 540-sample QRS-centered segments
  • ✅ Real-time aggregation of beat-type predictions

Waveform Segmentation

  • ✅ Sample-wise F1-score and accuracy
  • ✅ Onset/offset matching with ±150-sample tolerance
  • ✅ Segment-wise precision, recall, and F1-score
  • ✅ Overlay visualization: predicted vs. true waveforms

📂 Signal Format

Saved recordings are stored in .json format:

{
  "fs": 500,
  "leads": ["II"],
  "signals": [[0.003, 0.002, 0.005]],
  "start_time": "2025-06-01T12:01:00Z",
  "end_time": "2025-06-01T12:01:10Z"
}
  • fs: Sampling frequency (Hz)
  • leads: Array of lead names (e.g., ["II"])
  • signals: List of signal arrays (1 per lead)
  • start_time, end_time: ISO 8601 timestamps
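A minimal sketch of loading such a recording in Python, using only the fields listed above:

```python
import json
import numpy as np

# Parse a recording matching the .json schema above (inlined here for brevity)
record = json.loads("""{
  "fs": 500,
  "leads": ["II"],
  "signals": [[0.003, 0.002, 0.005]],
  "start_time": "2025-06-01T12:01:00Z",
  "end_time": "2025-06-01T12:01:10Z"
}""")

fs = record["fs"]                                  # sampling frequency in Hz
lead_ii = np.asarray(record["signals"][record["leads"].index("II")])
duration_s = len(lead_ii) / fs                     # seconds of signal per lead
```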

Saved recordings are stored in .hea format:

record123 1 500 5000
record123.dat 16 1000 16 0 0 200 0 II
# age: 26
# sex: M
# duration: 10 seconds
# start_time: 2025-06-01T12:01:00Z
# end_time: 2025-06-01T12:01:10Z
# Recorded via ECG mobile app

  • Line 1: <record_name> <n_signals> <fs> <n_samples>
  • Line 2: <file_name> <format> <gain> <adc_resolution> <adc_zero> <init_value> <checksum> <block_size> <lead_name>
  • Comments (#) include age, sex, start/end time, and duration

Saved recordings are stored in .dat format:

Signal samples are saved as Int16, scaled by the gain, and written in little-endian byte order. Lead order matches the .hea file.
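Decoding the .dat payload can be sketched with NumPy; the round-trip below assumes the standard WFDB convention that the header's gain field (1000 in the example .hea above) converts ADC units to millivolts, and the function name is illustrative:

```python
import numpy as np
import tempfile, os

def read_dat(path, gain=1000.0, adc_zero=0):
    """Decode Int16 little-endian samples back to millivolts."""
    raw = np.fromfile(path, dtype="<i2")               # "<i2" = little-endian Int16
    return (raw.astype(np.float64) - adc_zero) / gain

# Round-trip with synthetic samples at gain 1000 ADC units/mV
samples_mv = np.array([0.003, 0.002, 0.005])
with tempfile.NamedTemporaryFile(suffix=".dat", delete=False) as f:
    np.round(samples_mv * 1000).astype("<i2").tofile(f)
    path = f.name
decoded = read_dat(path)
os.remove(path)
```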


📥 Import & Interoperability

The app supports importing ECG files in multiple formats:

  • .dat + .hea (WFDB-compliant):
    • e.g., MITDB, INCARTDB, LUDB
  • .json format exported by the ECGApp

All imported signals are parsed and normalized, including:

  • Sampling frequency (fs)
  • Number of samples
  • Gain, resolution, zero offset
  • Lead name (e.g., II)

Supported encoding:

  • 🧩 16-bit integer (format 16)
  • ✅ Single-lead inputs (Lead II preferred)

☁️ iCloud & File Sharing

ECGApp fully integrates with the iOS Files system, enabling secure cloud-based data access:

  • 📤 Export ECG recordings to iCloud Drive for backup or manual inspection
  • 📥 Import .dat / .hea / .json files from iCloud, USB, or AirDrop
  • 🔁 Seamlessly transfer data between iPhone, iPad, or Mac
  • 📁 Files are available under Files → ECGApp → On My iPhone

This makes it easy to analyze, review, or back up ECG sessions securely.


πŸ›‘οΈ Data Privacy

All ECG data is processed and stored locally on the user’s device.
No recordings or personal information are sent to external servers.

  • 📱 All AI inference and signal analysis are performed on-device
  • ☁️ If the user chooses to share recordings via iCloud, AirDrop, or Files, the transfer is secure and user-controlled
  • 🛑 The app does not collect, upload, or transmit any data automatically
  • ✅ No third-party analytics, tracking, or background syncing

Users have full control over their data:

  • 📂 Access and manage saved recordings (.json, .dat, .hea)
  • 🗑 Manually delete or export any file
  • 🔒 Bluetooth connections are limited to approved UUIDs only

Your data remains yours: private, secure, and under your control.


📦 Future Work (Planned)

  • 💡 Multi-lead ECG support (e.g., V1, V5, aVR)
  • 🧪 Personalized on-device model fine-tuning
  • 🩺 AI-based tagging of symptoms and abnormalities
  • 📈 Long-term trend tracking and visualization

📜 License

MIT License, for research and non-commercial use only.


Authors

Marcin Sztukowski, student at the Jagiellonian University in Krakow
