OBINexus AI Psychoacoustic Classifier: A Constitutionally-Compliant, Math-Driven Shouting Detector
DΞCIBΞLION is an audio intelligence module forged in the labs of OBINexus, where noise meets logic and shouting is a feature, not a bug. It mathematically analyzes human vocal input to determine emotional projection through log-scaled loudness evaluation, using a sacred constant: 85 dB.
Anything above that? It's shouting. Anything below? You're just passionate.
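In code, the whole constitutional test reduces to a few lines of arithmetic. Below is a minimal sketch, assuming a mono waveform normalized to [-1, 1] and a hypothetical `ref` calibration constant that maps RMS amplitude onto the dB scale the mandate cares about; the real module may measure loudness differently.

```python
import numpy as np
import librosa

SHOUT_THRESHOLD_DB = 85.0  # the sacred constant

def estimate_db(path: str, ref: float = 1e-5) -> float:
    """Log-scaled loudness: RMS amplitude converted to decibels against `ref`."""
    y, _ = librosa.load(path, sr=None, mono=True)   # waveform in [-1, 1]
    rms = np.sqrt(np.mean(y ** 2))
    return 20.0 * np.log10(np.maximum(rms, 1e-12) / ref)

def is_shouting(path: str) -> bool:
    """At or above 85 dB: shouting. Below: mere passion."""
    return estimate_db(path) >= SHOUT_THRESHOLD_DB
```

Whether a reading of 85 here corresponds to 85 dB SPL in the room depends entirely on that hypothetical calibration constant.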
- 📏 Logarithmic Amplitude Scaling – Faithful to psychoacoustic perception.
- 🔊 Decibel-Based Emotion Segmentation – 85 dB: The Great Divider.
- 🧮 MFCC-Based CNN – Because everything's better when it's filtered through 40 coefficients of raw judgment.
- 🧱 Torch-Powered Architecture – Low-latency yelling recognition for real-time emotional analytics.
- 🧰 Dataset-Ready Classifier – Accepts raw `.wav` files like a champ.
- `ObinexusEmotionNet` – CNN architecture for binary classification (Not Shouting / Shouting)
- `VocalEmotionDataset` – MFCC + decibel tagging from audio files
- `apply_log_scale()` – Applies perceptual loudness transformation using log₁₀(1 + α|x|)
- `visualize_mfcc()` – See your screams visualized in all their MFCC glory
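Both the log-scale transform and the MFCC tagging sit comfortably on top of numpy and librosa. The sketch below shows one plausible shape for them; the α default, the 40-coefficient setting, and the `extract_mfcc` helper (a stand-in for what `VocalEmotionDataset` does internally) are assumptions rather than the shipped API.

```python
import numpy as np
import librosa

def apply_log_scale(x: np.ndarray, alpha: float = 100.0) -> np.ndarray:
    """Perceptual loudness compression: log10(1 + alpha * |x|)."""
    return np.log10(1.0 + alpha * np.abs(x))

def extract_mfcc(path: str, n_mfcc: int = 40) -> np.ndarray:
    """Forty coefficients of raw judgment, shape (n_mfcc, frames)."""
    y, sr = librosa.load(path, sr=None, mono=True)
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
```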
```bash
pip install numpy librosa torch matplotlib
```

```python
from decibelion import VocalEmotionDataset, ObinexusEmotionNet, visualize_mfcc
# Load and visualize
visualize_mfcc('your_emotional_outburst.wav')
# Dataset and model
dataset = VocalEmotionDataset(['your_emotional_outburst.wav'])
model = ObinexusEmotionNet()
# Predict your shouting sins
x, y = dataset[0]
output = model(x.unsqueeze(0))  # Batch dimension
```

By OBINexus Constitutional Mandate:
- `dB >= 85` → shouting
- `dB < 85` → tolerable emotional projection
- `dB == 84` → you, specifically
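Enforcing the mandate with learned features rather than a bare dB comparison is what `ObinexusEmotionNet` is for. A minimal Torch stand-in is sketched below; the layer sizes and the (batch, 1, n_mfcc, frames) input shape are assumptions, not the shipped architecture.

```python
import torch
import torch.nn as nn

class EmotionNetSketch(nn.Module):
    """Tiny CNN over MFCCs producing two logits: Not Shouting / Shouting."""

    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 4)),  # fixed-size features regardless of clip length
        )
        self.classifier = nn.Linear(32 * 4 * 4, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 1, n_mfcc, frames), i.e. an MFCC matrix with a channel axis
        z = self.features(x)
        return self.classifier(z.flatten(1))
```

Argmax over the two logits yields the constitutional verdict.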
OBINexus ∞ MIT License with Embedded Shout Clause™. No yelling without constitutional justification.