
KlongPy: A High-Performance Array Language with Autograd

KlongPy is a Python adaptation of the Klong array language, offering high-performance vectorized operations. It prioritizes compatibility with Python, allowing seamless use of Python's expansive ecosystem while retaining Klong's succinctness.

KlongPy backends include NumPy and optional PyTorch (CPU, CUDA, and Apple MPS). When PyTorch is enabled, automatic differentiation (autograd) is supported; otherwise, numeric differentiation is the default.
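For instance, the same gradient expression runs under either backend; with torch the result is exact, while the NumPy backend approximates it numerically:

f::{x^2}
f:>3.0                  :" -> 6.0 (exact under torch, ~6.0 under numpy)"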

Full documentation: https://klongpy.org

New in v0.7.0, KlongPy brings gradient-based programming to an already succinct array language, so you can differentiate compact array expressions directly. It is also a batteries-included system with IPC, DuckDB-backed database tooling, web/websocket support, and other integrations exposed seamlessly from the language.

PyTorch gradient descent (10+ lines):

import torch
x = torch.tensor(5.0, requires_grad=True)
optimizer = torch.optim.SGD([x], lr=0.1)
for _ in range(100):
    loss = x ** 2
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
print(x)  # ~0

KlongPy gradient descent (2 lines):

f::{x^2}; s::5.0
{s::s-(0.1*f:>s)}'!100   :" s -> 0"

Array languages like APL, K, and Q revolutionized finance by treating operations as data transformations, not loops. KlongPy brings this philosophy to machine learning: gradients become expressions you compose, not boilerplate you maintain. The result is a succinct, mathematics-like notation that extends naturally to machine learning.
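
As a small sketch of that composability (using only operators shown elsewhere in this README): the gradient operator applies to composed functions like any other expression:

g::{x^3}; h::{+/x}
f::{h(g(x))}            :" f(x) = sum of cubes"
f:>[1.0 2.0]            :" -> [3.0 12.0], i.e. 3*x^2 elementwise"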

Quick Install

# REPL + NumPy backend (pick one option below)
pip install "klongpy[repl]"
kgpy

# Enable torch backend (autograd + GPU)
pip install "klongpy[torch]"
kgpy --backend torch

# Everything (web, db, websockets, torch, repl)
pip install "klongpy[all]"

REPL

$ kgpy
Welcome to KlongPy REPL v0.7.0
Author: Brian Guarraci
Web: http://klongpy.org
Backend: torch (mps)
]h for help; Ctrl-D or ]q to quit

?>

Why KlongPy?

For Quants and Traders

Optimize portfolios with gradients in a language designed for arrays:

:" Portfolio optimization: gradient of Sharpe ratio"
returns::[0.05 0.08 0.03 0.10]      :" Annual returns per asset"
vols::[0.15 0.20 0.10 0.25]         :" Volatilities per asset"
w::[0.25 0.25 0.25 0.25]            :" Portfolio weights"

sharpe::{(+/x*returns)%((+/((x^2)*(vols^2)))^0.5)}
sg::sharpe:>w                       :" Gradient of Sharpe ratio"
.d("sharpe gradient="); .p(sg)
sharpe gradient=[0.07257738709449768 0.032256484031677246 0.11693036556243896 -0.22176480293273926]
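
The gradient can feed directly back into the weights; a minimal sketch of one gradient-ascent step (not part of the output above), renormalizing so the weights still sum to 1:

w::w+(0.01*sharpe:>w)   :" Step uphill on the Sharpe ratio"
w::w%(+/w)              :" Renormalize weights to sum to 1"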

For ML Researchers

Neural networks in pure array notation:

:" Single-layer neural network with gradient descent"
.bkf(["exp"])
sigmoid::{1%(1+exp(0-x))}
forward::{sigmoid((w1*x)+b1)}
X::[0.5 1.0 1.5 2.0]; Y::[0.2 0.4 0.6 0.8]
w1::0.1; b1::0.1; lr::0.1
loss::{+/((forward'X)-Y)^2}

:" Train with multi-param gradients"
{grads::loss:>[w1 b1]; w1::w1-(lr*grads@0); b1::b1-(lr*grads@1)}'!1000
.d("w1="); .d(w1); .d(" b1="); .p(b1)
w1=1.74 b1=-2.17
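
The trained parameters then serve for prediction through the same forward function:

forward'X               :" -> approximately [0.2 0.4 0.6 0.8]"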

For Scientists

Express mathematics directly:

:" Gradient of f(x,y,z) = x^2 + y^2 + z^2 at [1,2,3]"
f::{+/x^2}
f:>[1 2 3]
[2.0 4.0 6.0]

The Array Language Advantage

Array languages express what you want, not how to compute it. This enables automatic optimization:

Operation          Python            KlongPy
Sum an array       sum(a)            +/a
Running sum        np.cumsum(a)      +\a
Dot product        np.dot(a,b)       +/a*b
Average            sum(a)/len(a)     (+/a)%#a
Gradient           10+ lines         f:>x
Multi-param grad   20+ lines         loss:>[w b]
Jacobian           15+ lines         x∂f
Optimizer          10+ lines         {w::w-(lr*f:>w)}
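
A few of those rows, evaluated directly in the REPL:

a::[1 2 3]; b::[4 5 6]
+/a*b                   :" Dot product -> 32"
(+/a)%#a                :" Average -> 2.0"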

KlongPy inherits from the APL family tree (APL → J → K/Q → Klong), adding Python integration and automatic differentiation.

Performance: NumPy vs PyTorch Backend

The PyTorch backend provides significant speedups on large arrays, especially with GPU acceleration (an RTX 4090 in this case):

$ python3 tests/perf_vector.py
============================================================
VECTOR OPS (element-wise, memory-bound)
  Size: 10,000,000 elements, Iterations: 100
============================================================
NumPy (baseline)                    0.021854s
KlongPy (numpy)                     0.001413s  (15.46x vs NumPy)
KlongPy (torch, cpu)                0.000029s  (761.22x vs NumPy)
KlongPy (torch, cuda)               0.000028s  (784.04x vs NumPy)

============================================================
MATRIX MULTIPLY (compute-bound, GPU advantage)
  Size: 4000x4000, Iterations: 5
============================================================
NumPy (baseline)                    0.078615s
KlongPy (numpy)                     0.075400s  (1.04x vs NumPy)
KlongPy (torch, cpu)                0.077350s  (1.02x vs NumPy)
KlongPy (torch, cuda)               0.002339s  (33.62x vs NumPy)

Apple Silicon MPS support (measured here on an M1 Mac Studio) also enables fast local work:

$ python tests/perf_backend.py --compare
Benchmark                   NumPy (ms)   Torch (ms)      Speedup
----------------------------------------------------------------------
vector_add_1M                    0.327        0.065    5.02x (torch)
compound_expr_1M                 0.633        0.070    9.00x (torch)
sum_1M                           0.246        0.087    2.84x (torch)
grade_up_100K                    0.588        0.199    2.96x (torch)
enumerate_1M                     0.141        0.050    2.83x (torch)

Complete Feature Set

KlongPy is a batteries-included platform with kdb+/Q-inspired features:

Core Language

  • Vectorized Operations: NumPy/PyTorch-powered bulk array operations
  • Automatic Differentiation: Native :> operator for exact gradients
  • GPU Acceleration: CUDA and Apple MPS support via PyTorch
  • Python Integration: Import any Python library with .py() and .pyf()
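
For example, a minimal sketch of the Python bridge, assuming .pyf takes a module name and a function name (see the docs for the exact signature):

.pyf("math";"sqrt")     :" Import Python's math.sqrt into the current context"
sqrt(16.0)              :" -> 4.0"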

Data Infrastructure (kdb+/Q-like)

  • Columnar Tables and SQL: DuckDB-backed database tooling (klongpy.db)
  • IPC: Client/server remote execution with .srv() and .cli()
  • Web and WebSockets: HTTP endpoints and streaming via klongpy.web

Documentation

Full documentation: https://briangu.github.io/klongpy

Typing Special Characters

KlongPy uses Unicode operators for mathematical notation. Here's how to type them:

Symbol   Name      Mac                                           Windows               Description
∇        Nabla     Option + v then select, or Character Viewer   Alt + 8711 (numpad)   Numeric gradient
∂        Partial   Option + d                                    Alt + 8706 (numpad)   Jacobian operator

Mac Tips:

  • Option + d types ∂ directly
  • For ∇, open Character Viewer with Ctrl + Cmd + Space, search "nabla"
  • Or simply copy-paste: ∇ ∂

Alternative: where available, use function equivalents that don't require special characters:

3∇f             :" Using nabla"
.jacobian(f;x)  :" Instead of x∂f"

Syntax Cheat Sheet

Functions take up to 3 parameters, always named x, y, z:

:" Operators (right to left evaluation)"
5+3*2           :" 11 (3*2 first, then +5)"
+/[1 2 3]       :" 6  (sum: + over /)"
*/[1 2 3]       :" 6  (product: * over /)"
#[1 2 3]        :" 3  (length)"
3|5             :" 5  (max)"
3&5             :" 3  (min)"

:" Functions"
avg::{(+/x)%#x}         :" Monad (1 arg)"
dot::{+/x*y}            :" Dyad (2 args)"
clip::{(x|y)&z}         :" Triad (3 args): min(max(x,y),z)"

:" Adverbs (modifiers)"
f::{x^2}
f'[1 2 3]               :" Each: apply f to each -> [1 4 9]"
+/[1 2 3]               :" Over: fold/reduce -> 6"
+\[1 2 3]               :" Scan: running fold -> [1 3 6]"

:" Autograd"
f::{x^2}
3∇f                     :" Numeric gradient at x=3 -> ~6.0"
f:>3                    :" Autograd (exact with torch) at x=3 -> 6.0"
f::{+/x^2}             :" Redefine f as sum-of-squares"
f:>[1 2 3]              :" Gradient -> [2 4 6]"

:" Multi-parameter gradients"
w::2.0; b::3.0
loss::{(w^2)+(b^2)}
loss:>[w b]             :" Gradients for both -> [4.0 6.0]"

:" Jacobian (for vector functions)"
g::{x^2}                :" Element-wise square"
[1 2]∂g                 :" Jacobian matrix -> [[2 0] [0 4]]"

Examples

1. Basic Array Operations

?> a::[1 2 3 4 5]
[1 2 3 4 5]
?> a*a                    :" Element-wise square"
[1 4 9 16 25]
?> +/a                    :" Sum"
15
?> (*/a)                  :" Product"
120
?> avg::{(+/x)%#x}        :" Define average"
:monad
?> avg(a)
3.0

2. Gradient Descent

Minimize f(x) = (x-3)^2

(with PyTorch's autograd)

$ rlwrap kgpy --backend torch
?> f::{(x-3)^2}
:monad
?> s::10.0; lr::0.1
0.1
?> {s::s-(lr*f:>s); s}'!10
[8.600000381469727 7.4800004959106445 6.584000587463379 5.8672003746032715 5.293760299682617 4.835008144378662 4.468006610870361 4.174405097961426 3.9395241737365723 3.751619338989258]

(Numerical differentiation)

$ rlwrap kgpy
?> f::{(x-3)^2}
:monad
?> s::10.0; lr::0.1
0.1
?> {s::s-(lr*f:>s); s}'!10
[8.60000000104776 7.480000001637279 6.584000001220716 5.867200000887465 5.2937600006031005 4.835008000393373 4.4680064002611175 4.174405120173077 3.939524096109306 3.7516192768605094]

3. Linear Regression

:" Data: y = 2*x + 3 + noise"
X::[1 2 3 4 5]
Y::[5.1 6.9 9.2 10.8 13.1]

:" Model parameters"
w::0.0; b::0.0

:" Loss function"
mse::{(+/(((w*X)+b)-Y)^2)%#X}

:" Train with multi-parameter gradients"
lr::0.01
{grads::mse:>[w b]; w::w-(lr*grads@0); b::b-(lr*grads@1)}'!1000

.d("Learned: w="); .d(w); .d(" b="); .p(b)
Learned: w=2.01 b=2.97
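
With the learned parameters, prediction is just the model expression; for example, for a new input x=6:

(w*6)+b                 :" -> about 15.0 (true value 2*6+3 = 15)"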

4. Database Operations

?> .py("klongpy.db")
?> t::.table([["name" ["Alice" "Bob" "Carol"]] ["age" [25 30 35]]])
name  age
Alice  25
Bob    30
Carol  35
?> db::.db(:{},"T",t)
?> db("SELECT * FROM T WHERE age > 27")
name  age
Bob    30
Carol  35

5. IPC: Distributed Computing

Server:

?> avg::{(+/x)%#x}
:monad
?> .srv(8888)
1

Client:

?> f::.cli(8888)              :" Connect to server"
remote[localhost:8888]:fn
?> myavg::f(:avg)             :" Get remote function reference"
remote[localhost:8888]:fn:avg:monad
?> myavg(!1000000)            :" Execute on server"
499999.5

6. Web Server

.py("klongpy.web")
data::!10
index::{x; "Hello from KlongPy! Data: ",data}
get:::{}; get,"/",index
post:::{}
h::.web(8888;get;post)
.p("Server ready at http://localhost:8888")
$ curl http://localhost:8888
['Hello from KlongPy! Data: ' 0 1 2 3 4 5 6 7 8 9]

Installation Options

Basic Runtime (NumPy only)

pip install klongpy

REPL Support

pip install "klongpy[repl]"

With PyTorch Autograd (Recommended)

pip install "klongpy[torch]"
kgpy --backend torch          # Enable torch backend

Web / DB / WebSockets Extras

pip install "klongpy[web]"
pip install "klongpy[db]"
pip install "klongpy[ws]"

Full Installation (REPL, DB, Web, WebSockets, Torch)

pip install "klongpy[all]"

Lineage and Inspiration

KlongPy stands on the shoulders of giants:

  • APL (1966): Ken Iverson's revolutionary notation
  • J: ASCII-friendly APL successor
  • K/Q/kdb+: High-performance time series and trading systems
  • Klong: Nils M Holm's elegant, accessible array language
  • NumPy: The "Iverson Ghost" in Python's scientific stack
  • PyTorch: Automatic differentiation and GPU acceleration

KlongPy combines Klong's simplicity with Python's ecosystem and PyTorch's autograd, creating something new: an array language where gradients are first-class citizens.

Use Cases

  • Quantitative Finance: Self-optimizing trading strategies, risk models, portfolio optimization
  • Machine Learning: Neural networks, gradient descent, optimization in minimal code
  • Scientific Computing: Physics simulations, numerical methods, data analysis
  • Time Series Analysis: Signal processing, feature engineering, streaming data
  • Rapid Prototyping: Express complex algorithms in few lines, then optimize

Status

KlongPy is a superset of the Klong array language, passing all Klong integration tests plus additional test suites. The PyTorch backend provides GPU acceleration (CUDA, MPS) and automatic differentiation.

Ongoing development:

  • Expanded torch backend coverage
  • Additional built-in tools and integrations
  • Improved error messages and debugging

Development

git clone https://github.com/briangu/klongpy.git
cd klongpy
pip install -e ".[dev]"   # Install in editable mode with dev dependencies
python3 -m pytest tests/  # Run tests

Issues

This project does not accept direct issue submissions.

Please start with a GitHub Discussion. Maintainers will promote validated discussions to Issues.

Active contributors may be invited to open issues directly.

Contributors

See CONTRIBUTING.md for contribution workflow, discussion-first policy, and code standards.

Documentation

# Install docs tooling
pip install -e ".[docs]"

# Build the site into ./site
mkdocs build

# Serve locally with live reload
mkdocs serve

Acknowledgements

Huge thanks to Nils M Holm for creating Klong and writing the Klong Book, which made this project possible.
