Unified Python toolkit for Math, AI, ML, Quantum, Crypto, Vision, Graphs & GenAI — built for students, researchers, and developers.
STMATH is a modular, educational, and developer-friendly Python library for mathematics, AI, ML, quantum computing, cryptography, vision, graph algorithms, time-series analysis, and GenAI helpers. It is designed for researchers, students, and educators who want clean, reusable functions with consistent APIs and broad domain coverage.
Scientific computing in Python is typically fragmented across multiple specialized libraries such as NumPy, SciPy, SymPy, scikit-learn, and domain-specific toolkits for cryptography, optimization, and quantum computing. While powerful, these libraries can be heavy for educational use and rapid interdisciplinary prototyping.
STMATH addresses this gap by providing a unified, lightweight, and modular mathematical toolkit that integrates core mathematics, statistics, machine learning metrics, cryptography, quantum utilities, vision mathematics, and optimization into a single coherent API. This design is particularly suited for students, educators, and early-stage researchers who require a broad yet consistent computational foundation without complex dependency stacks.
- Main Features
- Installation
- Testing
- Domains Covered
- Citation
- License
- Documentation
- Contributing
- Benchmarks
- About STMATH
STMATH offers:
- Statistics & Probability
- ML Metrics
- Deep Learning Utilities
- GenAI Math Tools
- Cryptography Utilities
- Quantum Computing Helpers
- Graph Algorithms
- Time Series Analysis
- Number Theory
- Vision Utilities
- Optimization Algorithms
- Finance & Aptitude Math
- Benchmarking Tools
Detailed domain-wise function documentation is provided below for reference and educational use.
Install from PyPI:

pip install stmath

To upgrade:

pip install --upgrade stmath

STMATH includes a test suite to verify the correctness of core mathematical, statistical, and algorithmic functions.
To run tests locally:
pip install -r requirements.txt
pytest
- Core Math & Scientific Functions
- Statistics & Probability
- ML & DL Metrics
- GEN-AI Math
- Graph Algorithms
- Vision Utilities
- Optimizers
- Finance & Aptitude
- Cryptography
- Quantum Computing
- Time Series Analysis
- Number Theory
- Benchmarking
Saksham Tomar. STMATH: A Modular Python Library for Unified Mathematical Computing Across Scientific Domains. 2025.
GitHub repository: https://github.com/saksham-1020/STMATH
import stmath as am
print(am.add(10, 5)) # → 15
print(am.sub(10, 5)) # → 5
print(am.mul(10, 5)) # → 50
print(am.div(10, 5)) # → 2.0
print(am.square(4)) # → 16
print(am.cube(3)) # → 27
print(am.sqrt(16)) # → 4.0
print(am.power(2, 3))   # → 8
add(a, b)
- Syntax: am.add(a, b)
- Example: am.add(10, 5) → 15
- Formula: a + b

sub(a, b)
- Syntax: am.sub(a, b)
- Example: am.sub(10, 5) → 5
- Formula: a − b

mul(a, b)
- Syntax: am.mul(a, b)
- Example: am.mul(10, 5) → 50
- Formula: a × b

div(a, b)
- Syntax: am.div(a, b)
- Example: am.div(10, 5) → 2.0
- Formula: a ÷ b

square(x)
- Syntax: am.square(x)
- Example: am.square(4) → 16
- Formula: x²

cube(x)
- Syntax: am.cube(x)
- Example: am.cube(3) → 27
- Formula: x³

sqrt(x)
- Syntax: am.sqrt(x)
- Example: am.sqrt(16) → 4.0
- Formula: √x

power(x, y)
- Syntax: am.power(x, y)
- Example: am.power(2, 3) → 8
- Formula: xʸ

percent(part, whole)
- Syntax: am.percent(part, whole)
- Example: am.percent(50, 200) → 25.0
- Formula: (part ÷ whole) × 100

percent_change(old, new)
- Syntax: am.percent_change(old, new)
- Example: am.percent_change(100, 120) → 20.0
- Formula: (new − old) ÷ old × 100
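A minimal pure-Python sketch of the two percentage formulas above (illustrative only, not STMATH's internal implementation):

```python
# Illustrative re-implementations of the documented formulas.

def percent(part, whole):
    # (part ÷ whole) × 100
    return part / whole * 100

def percent_change(old, new):
    # (new − old) ÷ old × 100
    return (new - old) / old * 100

print(percent(50, 200))          # 25.0
print(percent_change(100, 120))  # 20.0
```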
import stmath as am
print(am.exp(1)) # → 2.718 (Euler’s number e^1)
print(am.log(10)) # → 2.302 (Natural log)
print(am.log10(100)) # → 2.0 (Base‑10 log)
print(am.sin(3.14)) # → 0.00159
print(am.cos(3.14)) # → -1.0
print(am.tan(0.785)) # → 1.0
print(am.sinh(1)) # → 1.175
print(am.cosh(1)) # → 1.543
print(am.tanh(1)) # → 0.761
print(am.deg2rad(180)) # → 3.14159
print(am.rad2deg(3.14159)) # → 180.0
sin(x)
- Syntax: am.sin(x)
- Example: am.sin(am.pi/2) → 1.0
- Formula: sin(x)

cos(x)
- Syntax: am.cos(x)
- Example: am.cos(0) → 1.0
- Formula: cos(x)

tan(x)
- Syntax: am.tan(x)
- Example: am.tan(am.pi/4) → 1.0
- Formula: tan(x)

log10(x)
- Syntax: am.log10(x)
- Example: am.log10(100) → 2.0
- Formula: log₁₀(x)

ln(x)
- Syntax: am.ln(x)
- Example: am.ln(am.e) → 1.0
- Formula: ln(x)

exp(x)
- Syntax: am.exp(x)
- Example: am.exp(1) → 2.718
- Formula: eˣ

factorial(n)
- Syntax: am.factorial(n)
- Example: am.factorial(5) → 120
- Formula: n! = 1 × 2 × 3 × ... × n

deg2rad(deg)
- Syntax: am.deg2rad(deg)
- Example: am.deg2rad(180) → 3.14159
- Formula: (π ÷ 180) × deg

rad2deg(rad)
- Syntax: am.rad2deg(rad)
- Example: am.rad2deg(am.pi) → 180
- Formula: (180 ÷ π) × rad
import stmath as am
print(am.mean([1,2,3,4,5])) # → 3.0
print(am.variance([1,2,3,4,5])) # → 2.5
print(am.std_dev([1,2,3,4,5])) # → 1.58
print(am.binomial_pmf(n=5, k=2, p=0.5)) # → 0.3125
print(am.normal_pdf(x=0, mean=0, sd=1)) # → 0.3989
mean(data)
- Syntax: am.mean(data)
- Example: am.mean([10, 20, 30]) → 20
- Formula: (Σxᵢ) ÷ n

median(data)
- Syntax: am.median(data)
- Example: am.median([10, 20, 30]) → 20
- Formula: middle value of the sorted data

mode(data)
- Syntax: am.mode(data)
- Example: am.mode([10, 20, 20, 30]) → 20
- Formula: most frequent value

variance(data)
- Syntax: am.variance(data)
- Example: am.variance([10, 20, 30]) → 66.67
- Formula: Σ(xᵢ − μ)² ÷ n

std(data)
- Syntax: am.std(data)
- Example: am.std([10, 20, 30]) → 8.16
- Formula: √variance

data_range(data)
- Syntax: am.data_range(data)
- Example: am.data_range([10, 20, 30]) → 20
- Formula: max − min

iqr(data)
- Syntax: am.iqr(data)
- Example: am.iqr([1, 2, 3, 4, 5, 6, 7, 8]) → 4
- Formula: Q3 − Q1

z_score(x, mean, std)
- Syntax: am.z_score(x, mean, std)
- Example: am.z_score(70, 60, 5) → 2.0
- Formula: (x − μ) ÷ σ

nCr(n, r)
- Syntax: am.nCr(n, r)
- Example: am.nCr(5, 2) → 10
- Formula: n! ÷ (r! × (n − r)!)

nPr(n, r)
- Syntax: am.nPr(n, r)
- Example: am.nPr(5, 2) → 20
- Formula: n! ÷ (n − r)!

bayes(PA, PB, PBA)
- Syntax: am.bayes(PA, PB, PBA)
- Example: am.bayes(0.5, 0.4, 0.7) → 0.875
- Formula: P(A|B) = (P(B|A) × P(A)) ÷ P(B)

expected_value(values, probs)
- Syntax: am.expected_value(values, probs)
- Example: am.expected_value([1,2,3], [0.2,0.3,0.5]) → 2.3
- Formula: Σ(xᵢ × pᵢ)

normal_pdf(x, μ, σ)
- Syntax: am.normal_pdf(x, μ, σ)
- Example: am.normal_pdf(0, 0, 1) → 0.3989
- Formula: (1 ÷ (σ√2π)) × e^(−(x − μ)² ÷ (2σ²))

normal_cdf(x, μ, σ)
- Syntax: am.normal_cdf(x, μ, σ)
- Example: am.normal_cdf(0, 0, 1) → 0.5
- Formula: cumulative distribution function of the normal distribution

bernoulli_pmf(k, p)
- Syntax: am.bernoulli_pmf(k, p)
- Example: am.bernoulli_pmf(1, 0.6) → 0.6
- Formula: pᵏ × (1 − p)^(1 − k)

binomial_pmf(k, n, p)
- Syntax: am.binomial_pmf(k, n, p)
- Example: am.binomial_pmf(2, 5, 0.5) → 0.3125
- Formula: nCr(n, k) × pᵏ × (1 − p)^(n − k)

poisson_pmf(k, λ)
- Syntax: am.poisson_pmf(k, λ)
- Example: am.poisson_pmf(3, 2) → 0.1804
- Formula: (e^(−λ) × λᵏ) ÷ k!

exponential_pdf(x, λ)
- Syntax: am.exponential_pdf(x, λ)
- Example: am.exponential_pdf(2, 1) → 0.1353
- Formula: λ × e^(−λx)

uniform_pdf(a, b)
- Syntax: am.uniform_pdf(a, b)
- Example: am.uniform_pdf(0, 5) → 0.2
- Formula: 1 ÷ (b − a)

t_pdf(x, ν)
- Syntax: am.t_pdf(x, ν)
- Example: am.t_pdf(0, 10) → 0.389
- Formula: Student's t-distribution density

chi_square_pdf(x, k)
- Syntax: am.chi_square_pdf(x, k)
- Example: am.chi_square_pdf(2, 4) → 0.1839
- Formula: chi-square distribution density
import stmath as am
y_true = [1,0,1,1]
y_pred = [1,0,0,1]
print(am.accuracy(y_true, y_pred)) # → 0.75
print(am.precision(y_true, y_pred)) # → 1.0
print(am.recall(y_true, y_pred)) # → 0.66
print(am.f1_score(y_true, y_pred)) # → 0.8
accuracy(y_true, y_pred)
- Syntax: am.accuracy(y_true, y_pred)
- Example: am.accuracy([1,0,1,1], [1,0,0,1]) → 0.75
- Formula: (TP + TN) ÷ (TP + TN + FP + FN)

precision(y_true, y_pred)
- Syntax: am.precision(y_true, y_pred)
- Example: am.precision([1,0,1,1], [1,0,0,1]) → 1.0
- Formula: TP ÷ (TP + FP)

recall(y_true, y_pred)
- Syntax: am.recall(y_true, y_pred)
- Example: am.recall([1,0,1,1], [1,0,0,1]) → 0.666
- Formula: TP ÷ (TP + FN)

f1_score(y_true, y_pred)
- Syntax: am.f1_score(y_true, y_pred)
- Example: am.f1_score([1,0,1,1], [1,0,0,1]) → 0.8
- Formula: 2 × (precision × recall) ÷ (precision + recall)

mse(y_true, y_pred)
- Syntax: am.mse(y_true, y_pred)
- Example: am.mse([1,2,3], [1,2,4]) → 0.333
- Formula: Σ(yᵢ − ŷᵢ)² ÷ n

rmse(y_true, y_pred)
- Syntax: am.rmse(y_true, y_pred)
- Example: am.rmse([1,2,3], [1,2,4]) → 0.577
- Formula: √MSE

mae(y_true, y_pred)
- Syntax: am.mae(y_true, y_pred)
- Example: am.mae([1,2,3], [1,2,4]) → 0.333
- Formula: Σ|yᵢ − ŷᵢ| ÷ n

r2_score(y_true, y_pred)
- Syntax: am.r2_score(y_true, y_pred)
- Example: am.r2_score([1,2,3], [1,2,4]) → 0.5
- Formula: 1 − (Σ(yᵢ − ŷᵢ)² ÷ Σ(yᵢ − ȳ)²)

relu(x)
- Syntax: am.relu(x)
- Example: am.relu(-5) → 0
- Formula: max(0, x)

sigmoid(x)
- Syntax: am.sigmoid(x)
- Example: am.sigmoid(0) → 0.5
- Formula: 1 ÷ (1 + e^(−x))

tanh(x)
- Syntax: am.tanh(x)
- Example: am.tanh(0) → 0.0
- Formula: (eˣ − e^(−x)) ÷ (eˣ + e^(−x))

softmax(values)
- Syntax: am.softmax(values)
- Example: am.softmax([1,2,3]) → [0.09, 0.24, 0.66]
- Formula: e^(xᵢ) ÷ Σ e^(xⱼ)

entropy(probs)
- Syntax: am.entropy(probs)
- Example: am.entropy([0.5,0.5]) → 0.693
- Formula: −Σ(pᵢ × log(pᵢ))

kl_divergence(p, q)
- Syntax: am.kl_divergence(p, q)
- Example: am.kl_divergence([0.5,0.5], [0.9,0.1]) → 0.51
- Formula: Σ(pᵢ × log(pᵢ ÷ qᵢ))

binary_cross_entropy(y_true, y_pred)
- Syntax: am.binary_cross_entropy(y_true, y_pred)
- Example: am.binary_cross_entropy([1,0], [0.9,0.1]) → 0.105
- Formula: −[y × log(ŷ) + (1 − y) × log(1 − ŷ)]
import stmath as am
logits = [2.0, 1.0, 0.1]
print(am.softmax(logits)) # → [0.659, 0.242, 0.099]
print(am.temperature_softmax(logits, T=2)) # smoother distribution
print(am.attention([0.2,0.3,0.5])) # → normalized weights
logits_to_prob(logits)
- Syntax: am.logits_to_prob(logits)
- Example: am.logits_to_prob([2.0, 1.0, 0.1]) → [0.659, 0.242, 0.099]
- Formula: converts raw logits to probabilities via softmax normalization

softmax_temperature(logits, T)
- Syntax: am.softmax_temperature(logits, T)
- Example: am.softmax_temperature([2.0, 1.0, 0.1], T=2.0) → [0.50, 0.30, 0.19]
- Formula: e^(xᵢ/T) ÷ Σ e^(xⱼ/T)
- Note: higher T → smoother distribution; lower T → sharper distribution

attention_scores(weights)
- Syntax: am.attention_scores(weights)
- Example: am.attention_scores([0.2, 0.3, 0.5]) → [0.2, 0.3, 0.5] (already normalized)
- Formula: normalizes weights so that Σ = 1
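The temperature behaviour noted above can be sketched in a few lines of pure Python (illustrative only, not STMATH's internal implementation): dividing logits by T before the softmax flattens the distribution for T > 1 and sharpens it for T < 1.

```python
import math

def softmax_temperature(logits, T=1.0):
    # e^(xᵢ/T) ÷ Σ e^(xⱼ/T)
    exps = [math.exp(x / T) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.1]
p1 = softmax_temperature(logits, T=1.0)
p2 = softmax_temperature(logits, T=2.0)
# Higher T pulls the probabilities closer together:
print(max(p1) > max(p2))  # True
```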
import stmath as am
print(am.sha256("hello"))
# → "2cf24dba5fb0a30e26e83b2ac5b9e29e..." (truncated; the full digest is 64 hex characters)
print(am.gas_fee(gas_used=21000, gwei=50, eth_price=2000))
# → 2.1 USD (approx)
sha256(text)
- Syntax: am.sha256(text)
- Example: am.sha256("hello") → "2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824"
- Formula: SHA-256 cryptographic hash of the input text

gas_fee(gas_used, gwei, eth_price)
- Syntax: am.gas_fee(gas_used, gwei, eth_price)
- Example: am.gas_fee(21000, 50, 2000) → 2.1
- Formula: (gas_used × gwei × 1e-9) × eth_price (in USD)
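Both helpers above can be reproduced with the standard library alone (illustrative only, not STMATH's internal implementation):

```python
import hashlib

def sha256_hex(text):
    # SHA-256 digest of the UTF-8 encoded text, as a hex string.
    return hashlib.sha256(text.encode("utf-8")).hexdigest()

def gas_fee_usd(gas_used, gwei, eth_price):
    # (gas_used × gwei × 1e-9 ETH) × eth_price
    return gas_used * gwei * 1e-9 * eth_price

print(sha256_hex("hello")[:16])              # 2cf24dba5fb0a30e
print(round(gas_fee_usd(21000, 50, 2000), 2))  # 2.1
```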
import stmath as am
print(am.hadamard([1,0])) # → [0.707, 0.707]
print(am.pauli_x([1,0])) # → [0,1]
print(am.pauli_z([1,0])) # → [1,0]
hadamard(state)
- Syntax: am.hadamard(state)
- Example: am.hadamard([1,0]) → [0.707, 0.707]
- Formula: H|0⟩ = (|0⟩ + |1⟩) ÷ √2

pauli_x(state)
- Syntax: am.pauli_x(state)
- Example: am.pauli_x([1,0]) → [0,1]
- Formula: X|0⟩ = |1⟩, X|1⟩ = |0⟩
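These single-qubit gates act on a state written as an amplitude pair [a, b]; a minimal sketch of the matrix actions above (illustrative only, not STMATH's internal implementation):

```python
import math

def hadamard(state):
    # H maps [a, b] to [(a + b)/√2, (a − b)/√2].
    a, b = state
    s = 1 / math.sqrt(2)
    return [s * (a + b), s * (a - b)]

def pauli_x(state):
    # X swaps the |0⟩ and |1⟩ amplitudes.
    a, b = state
    return [b, a]

print(hadamard([1, 0]))  # [0.707..., 0.707...]
print(pauli_x([1, 0]))   # [0, 1]
```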
import stmath as am
adj = {
0: [1,2],
1: [2],
2: [0,3],
3: [3]
}
print(am.bfs_distance(adj, 0)) # → {0:0, 1:1, 2:1, 3:2}
print(am.dijkstra_shortest_path(adj, 0)) # shortest paths
bfs_distance(adj, start)
- Syntax: am.bfs_distance(adj, start)
- Example: am.bfs_distance({"A":["B","C"],"B":["D"],"C":[],"D":[]}, "A") → {"A":0,"B":1,"C":1,"D":2}
- Formula: breadth-first search distance from the start node

dijkstra_shortest_path(adj, start)
- Syntax: am.dijkstra_shortest_path(adj, start)
- Example: am.dijkstra_shortest_path({"A":["B","C"],"B":["C"],"C":[]}, "A") → ({"A":0,"B":1,"C":1}, {"A":None,"B":"A","C":"A"})
- Formula: shortest-path distances from the start node using Dijkstra's algorithm (default weight = 1)
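The BFS-distance behaviour described above can be sketched with a queue in a few lines (illustrative only, not STMATH's internal implementation):

```python
from collections import deque

def bfs_distance(adj, start):
    # Unweighted shortest distances from `start` via breadth-first search.
    dist = {start: 0}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        for nbr in adj[node]:
            if nbr not in dist:
                dist[nbr] = dist[node] + 1
                queue.append(nbr)
    return dist

adj = {"A": ["B", "C"], "B": ["D"], "C": [], "D": []}
print(bfs_distance(adj, "A"))  # {'A': 0, 'B': 1, 'C': 1, 'D': 2}
```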
import stmath as am
data = [1,2,3,4,5]
print(am.sma(data, 3)) # → [None, None, 2.0, 3.0, 4.0]
print(am.ema(data, 0.5)) # → exponential moving average
sma(data, window)
- Syntax: am.sma(data, window)
- Example: am.sma([1,2,3,4,5], 3) → [2.0, 3.0, 4.0]
- Formula: average of the last window values

ema(data, alpha)
- Syntax: am.ema(data, alpha)
- Example: am.ema([1,2,3,4], 0.5) → [1, 1.5, 2.25, 3.125]
- Formula: EMAₜ = αxₜ + (1 − α)EMAₜ₋₁
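The EMA recurrence above, seeded with the first observation, reproduces the documented example (illustrative only, not STMATH's internal implementation):

```python
def ema(data, alpha):
    # EMAₜ = α·xₜ + (1 − α)·EMAₜ₋₁, with EMA₀ = x₀.
    out = [data[0]]
    for x in data[1:]:
        out.append(alpha * x + (1 - alpha) * out[-1])
    return out

print(ema([1, 2, 3, 4], 0.5))  # [1, 1.5, 2.25, 3.125]
```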
import stmath as am
print(am.gcd(48, 18)) # → 6
print(am.lcm(12, 15)) # → 60
print(am.is_prime(29)) # → True
print(am.mod_inverse(3, 11)) # → 4
print(am.fibonacci(10)) # → 55
gcd(a, b)
- Syntax: am.gcd(a, b)
- Example: am.gcd(12, 18) → 6
- Formula: greatest common divisor

lcm(a, b)
- Syntax: am.lcm(a, b)
- Example: am.lcm(12, 18) → 36
- Formula: least common multiple

is_prime(n)
- Syntax: am.is_prime(n)
- Example: am.is_prime(17) → True
- Formula: checks primality

prime_factors(n)
- Syntax: am.prime_factors(n)
- Example: am.prime_factors(28) → [2, 2, 7]
- Formula: factorization into primes

totient(n)
- Syntax: am.totient(n)
- Example: am.totient(9) → 6
- Formula: Euler's totient function

mod_inverse(a, m)
- Syntax: am.mod_inverse(a, m)
- Example: am.mod_inverse(3, 11) → 4
- Formula: a⁻¹ mod m

modular_pow(base, exp, mod)
- Syntax: am.modular_pow(base, exp, mod)
- Example: am.modular_pow(2, 10, 1000) → 24
- Formula: (base^exp) mod m

fibonacci(n)
- Syntax: am.fibonacci(n)
- Example: am.fibonacci(10) → 55
- Formula: nth Fibonacci number

pell_number(n)
- Syntax: am.pell_number(n)
- Example: am.pell_number(5) → 29
- Formula: recurrence Pₙ = 2Pₙ₋₁ + Pₙ₋₂

catalan_number(n)
- Syntax: am.catalan_number(n)
- Example: am.catalan_number(4) → 14
- Formula: (1 ÷ (n+1)) × (2n choose n)

divisor_count(n)
- Syntax: am.divisor_count(n)
- Example: am.divisor_count(12) → 6
- Formula: number of divisors

divisor_sum(n)
- Syntax: am.divisor_sum(n)
- Example: am.divisor_sum(12) → 28
- Formula: sum of divisors
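Modular exponentiation as listed above is conventionally implemented by square-and-multiply; a minimal sketch (illustrative only, not STMATH's internal implementation; Python's built-in three-argument pow does the same):

```python
def modular_pow(base, exp, mod):
    # Square-and-multiply: process the exponent bit by bit.
    result = 1
    base %= mod
    while exp > 0:
        if exp & 1:
            result = result * base % mod
        base = base * base % mod
        exp >>= 1
    return result

print(modular_pow(2, 10, 1000))  # 24  (1024 mod 1000)
# For a prime modulus m, a⁻¹ mod m = a^(m−2) mod m (Fermat's little theorem):
print(modular_pow(3, 11 - 2, 11))  # 4
```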
import stmath as am
print(am.conv2d_output(h=32, w=32, k=3, stride=1, pad=0))
# → (30, 30)
print(am.maxpool_shape(h=32, w=32, k=2, stride=2))
# → (16, 16)
print(am.iou([0,0,10,10], [5,5,15,15]))
# → 0.1428
print(am.nms([[0,0,10,10,0.9],[1,1,9,9,0.8]], threshold=0.5))
# → keeps the highest-confidence box
conv2d_output_shape(input_shape, kernel, stride, padding)
- Syntax: am.conv2d_output_shape(input_shape, kernel, stride, padding)
- Example: am.conv2d_output_shape((28,28), (3,3), (1,1), (0,0)) → (26,26)
- Formula: ((W − K + 2P) ÷ S + 1, (H − K + 2P) ÷ S + 1)

maxpool_output_shape(input_shape, pool, stride)
- Syntax: am.maxpool_output_shape(input_shape, pool, stride)
- Example: am.maxpool_output_shape((28,28), (2,2), (2,2)) → (14,14)
- Formula: ((W − P) ÷ S + 1, (H − P) ÷ S + 1)

iou(box1, box2)
- Syntax: am.iou(box1, box2)
- Example: am.iou([0,0,2,2], [1,1,3,3]) → 0.1428
- Formula: intersection area ÷ union area

nms(boxes, threshold)
- Syntax: am.nms(boxes, threshold)
- Example: am.nms([[0,0,2,2,0.9],[1,1,3,3,0.8]], 0.5) → [[0,0,2,2,0.9]]
- Formula: suppresses overlapping boxes above the threshold
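A minimal sketch of the shape and IoU formulas above, assuming square kernels/strides and boxes given as [x1, y1, x2, y2] (illustrative only, not STMATH's internal implementation):

```python
def conv2d_output_shape(h, w, k, stride, pad):
    # ((H − K + 2P) ÷ S + 1, (W − K + 2P) ÷ S + 1)
    return ((h - k + 2 * pad) // stride + 1,
            (w - k + 2 * pad) // stride + 1)

def iou(box1, box2):
    # Intersection rectangle, clamped to zero if the boxes do not overlap.
    ix1, iy1 = max(box1[0], box2[0]), max(box1[1], box2[1])
    ix2, iy2 = min(box1[2], box2[2]), min(box1[3], box2[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area1 = (box1[2] - box1[0]) * (box1[3] - box1[1])
    area2 = (box2[2] - box2[0]) * (box2[3] - box2[1])
    return inter / (area1 + area2 - inter)

print(conv2d_output_shape(32, 32, 3, 1, 0))        # (30, 30)
print(round(iou([0, 0, 10, 10], [5, 5, 15, 15]), 4))  # 0.1429
```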
import stmath as am
params = [0.5, -0.3]
grads = [0.1, -0.2]
print(am.sgd(params, grads, lr=0.01)) # → updated params
print(am.adam(params, grads, lr=0.01)) # → updated params
print(am.rmsprop(params, grads, lr=0.01)) # → updated params
print(am.cosine_anneal(lr=0.1, step=5, T=10)) # → annealed learning rate
sgd_update(param, grad, lr)
- Syntax: am.sgd_update(param, grad, lr)
- Example: am.sgd_update(1.0, 0.1, 0.01) → 0.999
- Formula: param − lr × grad

adam_update(param, grad, m, v, t, lr, beta1, beta2, eps)
- Syntax: am.adam_update(param, grad, m, v, t, lr, beta1, beta2, eps)
- Example: am.adam_update(1.0, 0.1, 0, 0, 1, 0.01, 0.9, 0.999, 1e-8) → updated param
- Formula: adaptive moment estimation (Adam) update

rmsprop_update(param, grad, cache, lr, beta, eps)
- Syntax: am.rmsprop_update(param, grad, cache, lr, beta, eps)
- Example: am.rmsprop_update(1.0, 0.1, 0, 0.01, 0.9, 1e-8) → updated param
- Formula: RMSProp update rule

lr_step_decay(lr, step, decay)
- Syntax: am.lr_step_decay(lr, step, decay)
- Example: am.lr_step_decay(0.1, 10, 0.5) → 0.05
- Formula: lr × decay^(step)

lr_cosine_anneal(lr, t, T)
- Syntax: am.lr_cosine_anneal(lr, t, T)
- Example: am.lr_cosine_anneal(0.1, 5, 10) → 0.05
- Formula: lr × 0.5 × (1 + cos(πt ÷ T))

momentum_update(param, grad, velocity, lr, beta)
- Syntax: am.momentum_update(param, grad, velocity, lr, beta)
- Example: am.momentum_update(1.0, 0.1, 0, 0.01, 0.9) → updated param
- Formula: momentum gradient descent update
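The two simplest update rules above can be sketched directly; the momentum velocity recurrence shown here is the common convention and is an assumption about STMATH's exact form (illustrative only, not its internal implementation):

```python
def sgd_update(param, grad, lr):
    # param − lr × grad
    return param - lr * grad

def momentum_update(param, grad, velocity, lr, beta):
    # velocity = beta × velocity + grad (assumed convention),
    # then a plain SGD step along the velocity.
    velocity = beta * velocity + grad
    return param - lr * velocity, velocity

print(sgd_update(1.0, 0.1, 0.01))           # ≈ 0.999
p, v = momentum_update(1.0, 0.1, 0.0, 0.01, 0.9)
print(p, v)                                 # ≈ 0.999, 0.1
```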
import stmath as am
print(am.simple_interest(p=1000, r=5, t=2)) # → 100.0
print(am.compound_interest(p=1000, r=5, t=2)) # → 102.5
print(am.emi(principal=500000, rate=7.5, years=20)) # → monthly EMI
print(am.future_value(p=1000, r=10, t=5)) # → 1610.51
simple_interest(principal, rate, time)
- Syntax: am.simple_interest(principal, rate, time)
- Example: am.simple_interest(1000, 5, 2) → 100.0
- Formula: (P × R × T) ÷ 100

compound_interest(principal, rate, time)
- Syntax: am.compound_interest(principal, rate, time)
- Example: am.compound_interest(1000, 5, 2) → 102.5
- Formula: P × (1 + R ÷ 100)^T − P

loan_emi(principal, rate, time)
- Syntax: am.loan_emi(principal, rate, time)
- Example: am.loan_emi(100000, 10, 12) → 8791.59
- Formula: [P × R × (1+R)^T] ÷ [(1+R)^T − 1]
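The EMI formula above, with the annual percentage rate converted to a monthly rate R and T counted in months (an assumption consistent with the documented example value; illustrative only, not STMATH's internal implementation):

```python
def loan_emi(principal, annual_rate_pct, months):
    # Monthly rate from the annual percentage rate.
    r = annual_rate_pct / 12 / 100
    f = (1 + r) ** months
    # [P × R × (1+R)^T] ÷ [(1+R)^T − 1]
    return principal * r * f / (f - 1)

print(round(loan_emi(100000, 10, 12), 2))  # ≈ 8791.59
```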
import stmath as am
print(am.percent(50, 200)) # → 25.0
print(am.percent_change(100, 120)) # → 20.0
print(am.profit_percent(100, 120)) # → 20.0
profit_percent(cost, selling)
- Syntax: am.profit_percent(cost, selling)
- Example: am.profit_percent(100, 120) → 20.0
- Formula: ((SP − CP) ÷ CP) × 100

loss_percent(cost, selling)
- Syntax: am.loss_percent(cost, selling)
- Example: am.loss_percent(100, 80) → 20.0
- Formula: ((CP − SP) ÷ CP) × 100

avg_speed(distance1, speed1, distance2, speed2)
- Syntax: am.avg_speed(distance1, speed1, distance2, speed2)
- Example: am.avg_speed(60, 30, 60, 60) → 40.0
- Formula: total distance ÷ total time
import stmath as am
# Time performance of a function
print(am.timeit(lambda: am.add(10, 5)))
# Memory usage of a function
print(am.mem_profile(lambda: am.mul(1000, 2000)))
timeit(func, *args)
- Syntax: am.timeit(func, *args)
- Example: am.timeit(sum, [1,2,3]) → execution time
- Formula: measures the runtime of a function call

mem_profile(func, *args)
- Syntax: am.mem_profile(func, *args)
- Example: am.mem_profile(sum, [1,2,3]) → memory usage
- Formula: measures the memory usage of a function call
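A tiny timing helper in the spirit of the one above can be built on time.perf_counter (illustrative only, not STMATH's internal implementation):

```python
import time

def timeit(func, *args):
    # Run the function once and return (result, elapsed seconds).
    start = time.perf_counter()
    result = func(*args)
    elapsed = time.perf_counter() - start
    return result, elapsed

result, elapsed = timeit(sum, [1, 2, 3])
print(result)        # 6
print(elapsed >= 0)  # True
```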
- Unified Math + AI + Quantum + Crypto + Vision toolkit
- Handbook-style API documentation with syntax, examples, and formulas
- Designed for educational clarity and reproducible experimentation
- Modular domain-wise architecture
- Lightweight dependency footprint
MIT — free for personal, academic, and commercial use.
STMATH follows a handbook-style documentation approach — every function includes syntax, example, and formula.
Full documentation will be hosted soon on GitHub Pages or ReadTheDocs.
For now, explore examples in this README or use help(am.function_name).
Coming soon:
- Domain-wise docs with emojis and bilingual support
- Visual examples for GenAI, Quantum, and Vision
- Test coverage and safe_run wrappers
STMATH welcomes contributions from developers, educators, and researchers.
To contribute:
- Fork the repo
- Create a feature branch
- Add tests and examples
- Submit a pull request
Please follow the CONTRIBUTING.md guide for standards and structure.
You can also:
- Open issues for bugs or feature requests
- Share educational use-cases or notebooks
- Help improve bilingual documentation
STMATH is built by Saksham Tomar, a Python developer and open-source educator.
It aims to unify math, AI, and GenAI tooling into a single, clean, and reusable Python library.
Goals:
- Make math and AI accessible to learners
- Provide reusable functions for research and education
- Maintain professional publishing standards (PyPI + GitHub)
- Support bilingual documentation (English + Hindi)
If you use STMATH in your project or classroom, feel free to share and star the repo!