A lightweight PostgreSQL wrapper for Python with connection pooling and a clean API.
Working with psycopg directly involves a lot of boilerplate. znpg removes the repetition while maintaining the flexibility of raw SQL when you need it.
Key features:
- Built-in connection pooling
- Simple CRUD operations
- Bulk insert support
- Transaction management
- SQL injection protection
- Type hints throughout
Installation:

```bash
pip install znpg
```

Quick start:

```python
from znpg import Database
# Connect to database
db = Database()
db.url_connect("postgresql://user:password@localhost:5432/dbname")
# Create table
db.create_table('users', {
    'id': 'SERIAL PRIMARY KEY',
    'name': 'VARCHAR(100) NOT NULL',
    'email': 'VARCHAR(255) UNIQUE',
    'created_at': 'TIMESTAMP DEFAULT CURRENT_TIMESTAMP'
})
# Insert data
db.insert('users', {
    'name': 'John Doe',
    'email': '[email protected]'
})
# Query data
users = db.select('users', where={'name': 'John Doe'})
print(users) # [{'id': 1, 'name': 'John Doe', 'email': '[email protected]', ...}]
# Close connection
db.close()
```

The recommended way to use znpg is with a context manager, which automatically handles connection cleanup:

```python
from znpg import Database
with Database() as db:
db.url_connect("postgresql://user:password@localhost:5432/dbname")
# Your database operations
users = db.select('users')
# Connection pool automatically closedUsing URL string:
db = Database()
db.url_connect("postgresql://user:password@localhost:5432/dbname")Using individual parameters:
db = Database()
db.manual_connect(
    username="user",
    password="password",
    host="localhost",
    db_name="dbname",
    port=5432
)
```
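In practice you will usually load credentials from the environment rather than hard-coding them. A minimal sketch using the documented url_connect; the DATABASE_URL variable name is a common convention, not something znpg requires:

```python
import os

from znpg import Database

db = Database()
# Read the connection string from the environment instead of hard-coding it
db.url_connect(os.environ["DATABASE_URL"])
```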
Select all:

```python
users = db.select('users')
```

Select with conditions:
```python
users = db.select('users', where={'active': True})
```

Select specific columns:
```python
users = db.select('users', columns=['name', 'email'])
```

With ordering and limit:
```python
users = db.select('users',
    where={'active': True},
    order_by='created_at DESC',
    limit=10
)
```

Single row:
```python
db.insert('users', {
    'name': 'Jane Smith',
    'email': '[email protected]'
})
```

Multiple rows (bulk insert):
```python
db.bulk_insert('users', [
    {'name': 'Alice', 'email': '[email protected]'},
    {'name': 'Bob', 'email': '[email protected]'},
    {'name': 'Charlie', 'email': '[email protected]'}
])
```

Bulk insert is significantly faster for large datasets.
Update with conditions:
```python
db.update('users',
    data={'active': False},
    conditions={'email': '[email protected]'}
)
```

Update all rows (requires explicit permission):
```python
db.update('users',
    data={'verified': True},
    allow_all=True
)
```

Delete with conditions:
```python
db.delete('users', conditions={'active': False})
```

Delete all (requires explicit permission):
```python
db.delete('users', allow_deleteall=True)
```

Create a table:

```python
db.create_table('products', {
    'id': 'SERIAL PRIMARY KEY',
    'name': 'VARCHAR(200) NOT NULL',
    'price': 'DECIMAL(10, 2)',
    'stock': 'INTEGER DEFAULT 0',
    'created_at': 'TIMESTAMP DEFAULT CURRENT_TIMESTAMP'
})
```

Drop a table:

```python
db.drop_table('old_table', allow_action=True)
```

With cascade:
```python
db.drop_table('parent_table', cascade=True, allow_action=True)
```

Check whether a table exists:

```python
if db.table_exists('users'):
    print("Table exists")
```

List a table's columns:

```python
columns = db.get_table_columns('users')
print(columns)  # ['id', 'name', 'email', 'created_at']
```

Truncate a table:

```python
db.truncate('logs')
```

Count rows:

```python
total_users = db.count('users')
active_users = db.count('users', where={'active': True})
```

Check existence and fetch by primary key:

```python
exists = db.exists('users', {'email': '[email protected]'})
user = db.get_by_id('users', 'id', 123)
```
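These helpers compose well. For example, a simple get-or-create pattern can be built from nothing but the calls shown above; a sketch, assuming the return shapes match the earlier examples:

```python
def get_or_create_user(db, name, email):
    # exists(), insert(), and select() are used exactly as documented above.
    # Note: this is not atomic; concurrent writers could still race on the
    # UNIQUE email constraint.
    if not db.exists('users', {'email': email}):
        db.insert('users', {'name': name, 'email': email})
    return db.select('users', where={'email': email})[0]
```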
For operations that need to be atomic:

```python
with db.transaction() as conn:
    cursor = conn.cursor()
    cursor.execute("UPDATE accounts SET balance = balance - 100 WHERE id = 1")
    cursor.execute("UPDATE accounts SET balance = balance + 100 WHERE id = 2")
    # Automatically commits on success, rolls back on error
```
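Assuming the exception that triggers the rollback propagates out of the with block (standard context-manager behaviour, though not stated explicitly above), you can catch it to confirm that neither statement was applied:

```python
try:
    with db.transaction() as conn:
        cursor = conn.cursor()
        cursor.execute("UPDATE accounts SET balance = balance - 100 WHERE id = 1")
        raise RuntimeError("simulated failure")  # forces a rollback
except RuntimeError:
    # The first UPDATE was rolled back along with everything else
    print("transfer aborted, no balances changed")
```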
When you need full control, you can execute raw SQL:

```python
# Query with results
results = db.query("SELECT * FROM users WHERE age > %s", [18])
# Execute without results
rows_affected = db.execute("DELETE FROM logs WHERE created_at < %s", ['2023-01-01'])
# Fetch single row
user = db.fetch_one("SELECT * FROM users WHERE id = %s", [123])
```

znpg includes safety checks for destructive operations:
UPDATE without WHERE clause:
```python
# This will raise ValueError
db.update('users', {'active': False})

# Must explicitly allow
db.update('users', {'active': False}, allow_all=True)
```

DELETE without WHERE clause:
```python
# This will raise ValueError
db.delete('users')

# Must explicitly allow
db.delete('users', allow_deleteall=True)
```

DROP TABLE requires confirmation:
```python
# This will raise AuthorizationError
db.drop_table('important_table')

# Must explicitly allow
db.drop_table('important_table', allow_action=True)
```

znpg uses connection pooling by default (1-10 connections). This means:
- Connections are reused across operations
- Better performance under load
- Automatic connection management
- Thread-safe operations
You don't need to manage connections manually - the pool handles everything.
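Because operations are described as thread-safe, a single Database instance can be shared across worker threads; a minimal sketch using only the documented select call:

```python
from concurrent.futures import ThreadPoolExecutor

from znpg import Database

db = Database()
db.url_connect("postgresql://user:password@localhost:5432/dbname")

def fetch_active_users():
    # Each call transparently checks a connection out of the shared pool
    return db.select('users', where={'active': True})

# Ten concurrent queries, all served by the same 1-10 connection pool
with ThreadPoolExecutor(max_workers=10) as executor:
    futures = [executor.submit(fetch_active_users) for _ in range(10)]
    results = [f.result() for f in futures]
```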
Requirements:

- Python 3.7+
- psycopg 3.0+
- psycopg-pool 3.0+
Bulk insert performance test (69 rows):
- Traditional loop insert: ~15-20 seconds
- znpg bulk_insert: <5 seconds
For data pipelines and ETL operations, bulk_insert provides significant performance improvements.
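In a pipeline this usually means accumulating transformed rows and flushing them in batches rather than inserting one at a time; a sketch, with an arbitrary batch size and a hypothetical load_rows helper:

```python
def load_rows(db, table, rows, batch_size=1000):
    """Write an iterable of row dicts to `table` in batches via bulk_insert."""
    batch = []
    for row in rows:
        batch.append(row)
        if len(batch) >= batch_size:
            db.bulk_insert(table, batch)
            batch = []
    if batch:  # flush the final partial batch
        db.bulk_insert(table, batch)
```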
All methods include error handling and return sensible defaults:

```python
# Returns empty list on error
users = db.select('nonexistent_table')  # []

# Returns False on error
success = db.insert('users', {'invalid': 'data'})  # False

# Returns 0 on error
count = db.count('nonexistent_table')  # 0
```

Errors are printed to the console for debugging.
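Because failures surface as default return values rather than exceptions, it is worth checking those values explicitly; for example:

```python
if not db.insert('users', {'name': 'Dana', 'email': '[email protected]'}):
    # insert() returned False; the row was not written
    print("insert failed, check the console output for the underlying error")
```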
MIT License - see LICENSE file for details.
Contributions are welcome. Please open an issue first to discuss proposed changes.
Built by Zain, a 17-year-old developer from Pakistan.
Changelog:

- Initial release
- Core CRUD operations
- Connection pooling
- Bulk insert support
- Table management
- Transaction support
- Safety checks for destructive operations