# ZNPG

A robust, high-level PostgreSQL database abstraction layer built on psycopg3, with connection pooling, query building, and context manager support.
## Philosophy
Database access should be simple, safe, and Pythonic.
ZNPG was born from frustration with psycopg2's verbosity and SQLAlchemy's complexity. We believe database libraries should:
- Get out of your way with clean, intuitive APIs
- Protect you from yourself with safe defaults
- Scale with your needs from scripts to production apps
Built by a developer who got tired of boilerplate. Used by 250+ developers who felt the same.
## What's New in v1.3.0

- Fixed transaction support — all write methods now accept an optional `conn` parameter, making atomic multi-operation transactions actually reliable
- All CRUD methods (`insert`, `update`, `delete`, `bulk_insert`, `select`, `fetch_one`, `query`, `execute`) now support `conn` passthrough
## Features

- Zero-boilerplate — from import to query in 3 lines
- Safe by default — no unsafe DELETE/UPDATE without explicit flags
- Built-in pooling — connection pooling that just works
- Full CRUD — high-level operations for 95% of use cases
- Raw SQL access — escape hatch when you need full control
- Type-hinted — modern Python with complete type hints
- Production-ready — connection health checks, stats, and maintenance ops
- JSON native — built-in import/export for data portability
- True atomic transactions — pass `conn` into any method to run it inside a transaction
## Table of Contents

- Overview
- Installation
- Quick Start
- Core Classes
- Connection Management
- CRUD Operations
- Transaction Support
- DDL Operations
- Utility Methods
- Error Handling
- Best Practices
- API Reference
## Overview

ZNPG provides a Pythonic interface to PostgreSQL databases with the following features:
- Connection Pooling: Built-in `psycopg_pool` integration for efficient connection management
- Context Managers: Safe resource handling with `with` statements
- CRUD Abstractions: High-level methods for common database operations
- Query Builder Integration: Seamless SQL generation via the `QueryBuilder` class
- Type Safety: Full type hints and generic support
- Atomic Transactions: Pass `conn` into any method to guarantee all-or-nothing execution
- JSON Export/Import: Native support for data serialization
## Installation

```bash
pip install znpg
```

Dependencies:

- `psycopg` (v3.x)
- `psycopg-pool`
- Python 3.8+
## Quick Start

```python
from znpg import Database

# Initialize with connection string
db = Database(min_size=2, max_size=10)
db.url_connect("postgresql://user:pass@localhost:5432/mydb")

# Or use manual connection parameters
db.manual_connect(
    username="user",
    password="pass",
    host="localhost",
    port=5432,
    db_name="mydb"
)

# Simple query
users = db.query("SELECT * FROM users WHERE age > %s", [18])

# Using context manager (recommended)
with Database() as db:
    db.url_connect("postgresql://user:pass@localhost/db")
    result = db.select("users", where={"status": "active"})
```

## Core Classes

### `Database`

The primary interface for all database operations.
```python
Database(
    min_size: int = 1,   # Minimum connections in pool
    max_size: int = 10,  # Maximum connections in pool
    timeout: int = 30    # Connection timeout in seconds
)
```

| Attribute | Type | Description |
|---|---|---|
| `pool` | `Optional[ConnectionPool]` | The underlying connection pool instance |
| `min_size` | `int` | Minimum number of connections maintained |
| `max_size` | `int` | Maximum number of connections allowed |
| `is_connected` | `bool` | Connection status flag |
| `timeout` | `int` | Connection acquisition timeout |
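As a rough illustration of how these parameters map to workloads (the numbers here are arbitrary choices, not library recommendations):

```python
# one-off script: a tiny pool is plenty
db = Database(min_size=1, max_size=2, timeout=10)

# long-running web app: keep warm connections, allow bursts
db = Database(min_size=5, max_size=20, timeout=30)
```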
## Connection Management

### `url_connect(conn_string)`

Connect using a PostgreSQL connection URI.
```python
db = Database()
db.url_connect("postgresql://admin:[email protected]:5432/production")
```

### `manual_connect(...)`

Connect using individual parameters.
```python
db.manual_connect(
    username="postgres",
    host="localhost",
    password="secure_pass",
    db_name="myapp",
    port=5432
)
```

### `get_connection()`

Context manager for raw connection access.
```python
with db.get_connection() as conn:
    with conn.cursor() as cur:
        cur.execute("SELECT version()")
        print(cur.fetchone())
```
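Assuming the pool hands back a standard psycopg3 connection (as the example above suggests), anything psycopg supports works here, e.g. streaming a table out with COPY. A sketch, where `handle_chunk` is a hypothetical stand-in for your own processing:

```python
with db.get_connection() as conn:
    with conn.cursor() as cur:
        # psycopg3 COPY support; iterating yields raw data chunks, not parsed rows
        with cur.copy("COPY users TO STDOUT (FORMAT CSV)") as copy:
            for chunk in copy:
                handle_chunk(chunk)  # hypothetical: process the raw CSV bytes
```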
### `is_healthy()`

Check database connectivity.

```python
if db.is_healthy():
    print("Database connection is active")
```
### `stats()`

Retrieve connection pool statistics.

```python
stats = db.stats()
# Returns: {"size": 5, "available": 3, "used": 2}
```

### `close()`

Explicitly close the connection pool.
```python
db.close()  # Called automatically when using context managers
```

## CRUD Operations

All write methods and most read methods accept an optional `conn` parameter. When provided, the method runs on that connection instead of grabbing one from the pool. This is how you achieve atomic transactions — see Transaction Support.
```python
# without conn — grabs its own connection, auto-commits
db.insert("users", {"name": "Alice"})

# with conn — runs inside your transaction, no auto-commit
with db.transaction() as conn:
    db.insert("users", {"name": "Alice"}, conn=conn)
```
### `query(sql, params, conn)`

Execute raw SQL and return results as a list of dicts.

```python
results = db.query(
    "SELECT * FROM orders WHERE status = %s AND amount > %s",
    ["pending", 100.00]
)
# Returns: [{"id": 1, "status": "pending", "amount": 150.00}, ...]
```

### `fetch_one(sql, params, conn)`

Fetch a single record or `None`.
```python
user = db.fetch_one("SELECT * FROM users WHERE email = %s", ["[email protected]"])
if user:
    print(user["name"])
```

### `select(..., conn)`

High-level SELECT with `QueryBuilder`.
```python
# Basic select
users = db.select("users")

# With filters
active_users = db.select(
    table="users",
    columns=["id", "name", "email"],
    where={"status": "active", "verified": True},
    order_by=["created_at DESC"],
    limit=10
)
```

### `get_by_id(table, id_name, id)`

Fetch a record by primary key.
```python
user = db.get_by_id("users", "user_id", 42)
```

### `count(table, where)`

Count records with optional filtering.
```python
total = db.count("orders")
pending_count = db.count("orders", where={"status": "pending"})
```

### `exists(table, where)`

Check if any records matching the criteria exist.
```python
has_admin = db.exists("users", {"role": "admin"})
```

### `execute(sql, params, conn)`

Execute raw SQL (INSERT, UPDATE, DELETE). Returns the affected row count.
```python
rows_deleted = db.execute("DELETE FROM logs WHERE created_at < %s", ["2023-01-01"])
```

### `insert(table, data, conn)`

Insert a single record.
```python
success = db.insert("users", {
    "name": "Jane Doe",
    "email": "[email protected]",
    "created_at": "2024-01-15"
})
```

### `update(table, data, conditions, allow_all, conn)`

Update records with safety checks.
```python
# Safe update with WHERE clause
rows_updated = db.update(
    table="users",
    data={"last_login": "2024-01-15"},
    conditions={"id": 42}
)

# Update all records (requires explicit flag)
db.update("users", {"status": "inactive"}, allow_all=True)
```

### `delete(table, conditions, allow_deleteall, conn)`

Delete records with safety checks.
```python
# Safe delete
deleted = db.delete("sessions", {"expired": True})

# Delete all (requires explicit flag)
db.delete("logs", allow_deleteall=True)
```

### `bulk_insert(table, data, on_conflict, conn)`

Efficient batch insertion.
```python
users = [
    {"name": "Alice", "email": "[email protected]"},
    {"name": "Bob", "email": "[email protected]"}
]
inserted = db.bulk_insert("users", users, on_conflict="DO NOTHING")
```
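For very large datasets you may want to insert in chunks so no single statement grows unbounded. A sketch (the chunk size of 500 is an arbitrary choice):

```python
rows = [{"name": f"user{i}", "email": f"user{i}@example.com"} for i in range(10_000)]

total = 0
for start in range(0, len(rows), 500):
    # each chunk commits on its own; wrap the loop in db.transaction()
    # and pass conn=conn if you need all-or-nothing across chunks
    total += db.bulk_insert("users", rows[start:start + 500], on_conflict="DO NOTHING")
print(f"inserted {total} rows")
```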
## Transaction Support

### `transaction()`

Yields a connection for atomic multi-operation transactions. Automatically commits on success and rolls back on any exception.

v1.3.0 fix: pass the yielded `conn` into every method call to guarantee they all run on the same connection — making rollbacks actually work.
```python
# Atomic checkout: sale recorded + inventory updated, or neither happens
try:
    with db.transaction() as conn:
        db.insert("sales", {"product_id": "abc", "total": 29.99}, conn=conn)
        db.update("inventory", {"quantity": 11}, {"product_id": "abc"}, conn=conn)
        # both succeed → auto-commit
except Exception as e:
    # either one fails → both roll back
    logger.error(f"Checkout failed, transaction rolled back: {e}")
```
Without `conn` (the old broken behavior):

```python
# DON'T do this — each call grabs its own connection, rollback won't affect them
with db.transaction() as conn:
    db.insert("sales", {...})      # connection A
    db.update("inventory", {...})  # connection B — not part of the transaction!
```
"id": "SERIAL PRIMARY KEY",
"name": "VARCHAR(255) NOT NULL",
"price": "DECIMAL(10,2)",
"created_at": "TIMESTAMP DEFAULT CURRENT_TIMESTAMP"
})db.drop_table("temp_table", allow_action=True)
db.drop_table("orders", cascade=True, allow_action=True)db.truncate("logs")if db.table_exists("migrations"):
print("Already set up")columns = db.get_table_columns("users")
# Returns: ['id', 'name', 'email', 'created_at']
```

### `create_index(table, columns, unique)`

```python
db.create_index("users", ["email"], unique=True)
db.create_index("orders", ["user_id", "created_at"])
```

### `vacuum(table, analyze)`

```python
db.vacuum()  # full database
db.vacuum("large_table", analyze=True)data = db.select("users")
Database.export_to_json("backup.json", data)data = Database.import_from_json("config.json")
db.bulk_insert("settings", data)conn_str = Database.c_string("user", "localhost", "pass", "db", 5432)
# Returns: "postgresql://user:pass@localhost:5432/db"All operations catch psycopg.Error and log via the configured logger. Methods return safe defaults on failure:
## Error Handling

All operations catch `psycopg.Error` and log via the configured logger. Methods return safe defaults on failure:

| Method | Failure Return |
|---|---|
| `select()` | `[]` |
| `insert()` | `False` |
| `update()` | `0` |
| `delete()` | `0` |
| `create_table()` | `False` |
| `drop_table()` | `False` |
| `truncate()` | `False` |
| `table_exists()` | `None` |
| `bulk_insert()` | `0` |
| `get_table_columns()` | `None` |
| `get_by_id()` | `None` |
| `count()` | `None` |
| `exists()` | `None` |
Connection failures are re-raised after logging.
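Because failures come back as defaults rather than exceptions, code that cares should distinguish "empty result" from "operation failed". For example, `count()` uses `None` as its failure sentinel (`alert_ops` below is a hypothetical alerting hook):

```python
pending = db.count("orders", where={"status": "pending"})

if pending is None:
    # the query itself failed; details are in the library's log output
    alert_ops("order count query failed")  # hypothetical alert hook
elif pending == 0:
    print("queue is empty")
else:
    print(f"{pending} orders pending")
```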
## Best Practices

### Use context managers

```python
# good
with Database() as db:
    db.url_connect(conn_str)
    data = db.select("users")

# risky — pool may not close if an exception occurs
db = Database()
db.url_connect(conn_str)
```

### Parameterize every query

```python
# safe
db.query("SELECT * FROM users WHERE id = %s", [user_id])
# never do this — SQL injection risk
db.query(f"SELECT * FROM users WHERE id = {user_id}")with db.transaction() as conn:
db.insert("orders", order_data, conn=conn) # correct
db.update("inventory", inv_data, conn=conn) # correcttry:
    db.url_connect(conn_str)
except Exception as e:
    logger.critical(f"Database connection failed: {e}")
    raise SystemExit(1)
```

### Lean on the safety flags

```python
# raises ValueError — missing WHERE
db.update("users", {"role": "admin"})
# works
db.update("users", {"role": "admin"}, allow_all=True)stats = db.stats()
if stats["available"] / stats["size"] < 0.2:
logger.warning("Pool running low on connections")| Method | Returns | Description |
|---|---|---|
| `url_connect(conn_string)` | `None` | Connect via URI |
| `manual_connect(...)` | `None` | Connect via params |
| `get_connection()` | `ContextManager` | Raw connection access |
| `transaction()` | `ContextManager` | Atomic transaction context |
| `query(sql, params, conn)` | `List[Dict]` | Raw SQL query |
| `execute(sql, params, conn)` | `int` | Raw SQL execution |
| `fetch_one(sql, params, conn)` | `Optional[Dict]` | Single record fetch |
| `select(..., conn)` | `List[Dict]` | High-level SELECT |
| `insert(table, data, conn)` | `bool` | Insert record |
| `update(table, data, conditions, allow_all, conn)` | `int` | Update records |
| `delete(table, conditions, allow_deleteall, conn)` | `int` | Delete records |
| `bulk_insert(table, data, on_conflict, conn)` | `int` | Batch insert |
| `create_table(table, columns)` | `bool` | Create table |
| `drop_table(table, cascade, allow_action)` | `bool` | Drop table |
| `truncate(table)` | `bool` | Truncate table |
| `table_exists(table)` | `Optional[bool]` | Check existence |
| `get_table_columns(table)` | `Optional[List[str]]` | Get columns |
| `get_by_id(table, id_name, id)` | `List[Dict]` | Fetch by PK |
| `count(table, where)` | `Optional[int]` | Count records |
| `exists(table, where)` | `Optional[bool]` | Check existence |
| `create_index(table, columns, unique)` | `int` | Create index |
| `vacuum(table, analyze)` | `int` | Run VACUUM |
| `is_healthy()` | `bool` | Health check |
| `stats()` | `Dict` | Pool statistics |
| `close()` | `None` | Close pool |
| `export_to_json(file, data, indent)` | `bool` | Static: export JSON |
| `import_from_json(file)` | `Union[dict, bool]` | Static: import JSON |
| `c_string(...)` | `str` | Static: build conn string |
**Version:** 1.3.0 · **License:** MIT · **Python Support:** 3.8+ · **PostgreSQL:** 12+ · **Author:** ZN-0X
For issues and contributions, visit the repository.