A powerful command-line tool for data processing and analysis
undatum (pronounced un-da-tum) is a modern CLI tool designed to make working with large datasets as simple and efficient as possible. It provides a unified interface for converting, analyzing, validating, and transforming data across multiple formats.
- Multi-format support: CSV, JSON Lines, BSON, XML, XLS, XLSX, Parquet, AVRO, ORC
- Compression support: ZIP, XZ, GZ, BZ2, ZSTD
- Low memory footprint: Streams data for efficient processing of large files
- Automatic detection: Encoding, delimiters, and file types
- Data validation: Built-in rules for emails, URLs, and custom validators
- Advanced statistics: Field analysis, frequency calculations, and date detection
- Flexible filtering: Query and filter data using expressions
- Schema generation: Automatic schema detection and generation
- AI-powered documentation: Automatic field and dataset descriptions using multiple LLM providers (OpenAI, OpenRouter, Ollama, LM Studio, Perplexity) with structured JSON output
pip install --upgrade pip setuptools
pip install undatum

Dependencies are declared in pyproject.toml and will be installed automatically by modern versions of pip (23+). If you see missing-module errors after installation, upgrade pip and retry.
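As a quick sanity check (a sketch: the --help flag is assumed from common CLI convention, not taken from this document), confirm the console script is on your PATH:

# Verify the CLI is installed and callable (--help is an assumption)
undatum --help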
- Python 3.8 or greater
python -m pip install --upgrade pip setuptools wheel
python -m pip install .
# or build distributables
python setup.py sdist bdist_wheel

# Get file headers
undatum headers data.jsonl
# Analyze file structure
undatum analyze data.jsonl
# Get statistics
undatum stats data.csv
# Convert XML to JSON Lines
undatum convert --tagname item data.xml data.jsonl
# Get unique values
undatum uniq --fields category data.jsonl
# Calculate frequency
undatum frequency --fields status data.csv

Analyzes data files and provides human-readable insights about structure, encoding, fields, and data types. With --autodoc, automatically generates field descriptions and dataset summaries using AI.
# Basic analysis
undatum analyze data.jsonl
# With AI-powered documentation
undatum analyze data.jsonl --autodoc
# Using specific AI provider
undatum analyze data.jsonl --autodoc --ai-provider openai --ai-model gpt-4o-mini
# Output to file
undatum analyze data.jsonl --output report.yaml --autodoc

Output includes:
- File type, encoding, compression
- Number of records and fields
- Field types and structure
- Table detection for nested data (JSON/XML)
- AI-generated field descriptions (with --autodoc)
- AI-generated dataset summary (with --autodoc)
AI Provider Options:
- --ai-provider: Choose provider (openai, openrouter, ollama, lmstudio, perplexity)
- --ai-model: Specify model name (provider-specific)
- --ai-base-url: Custom API endpoint URL (see the sketch below)
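Purely as an illustration of --ai-base-url (the endpoint URL here is hypothetical), pointing the openai provider at a self-hosted OpenAI-compatible gateway might look like:

# Route requests to a custom OpenAI-compatible endpoint (URL is an example)
undatum analyze data.csv --autodoc --ai-provider openai --ai-base-url http://localhost:8000/v1 --ai-model gpt-4o-mini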
Supported AI Providers:
- OpenAI (default if OPENAI_API_KEY is set):
  export OPENAI_API_KEY=sk-...
  undatum analyze data.csv --autodoc --ai-provider openai --ai-model gpt-4o-mini
- OpenRouter (supports multiple models via a unified API):
  export OPENROUTER_API_KEY=sk-or-...
  undatum analyze data.csv --autodoc --ai-provider openrouter --ai-model openai/gpt-4o-mini
- Ollama (local models, no API key required):
  # Start Ollama and pull a model first: ollama pull llama3.2
  undatum analyze data.csv --autodoc --ai-provider ollama --ai-model llama3.2
  # Or set a custom URL:
  export OLLAMA_BASE_URL=http://localhost:11434
- LM Studio (local models, OpenAI-compatible API):
  # Start LM Studio and load a model
  undatum analyze data.csv --autodoc --ai-provider lmstudio --ai-model local-model
  # Or set a custom URL:
  export LMSTUDIO_BASE_URL=http://localhost:1234/v1
- Perplexity (backward compatible, uses PERPLEXITY_API_KEY):
  export PERPLEXITY_API_KEY=pplx-...
  undatum analyze data.csv --autodoc --ai-provider perplexity
Configuration Methods:
AI provider can be configured via:
- Environment variables (lowest precedence):
  export UNDATUM_AI_PROVIDER=openai
  export OPENAI_API_KEY=sk-...
- Config file (medium precedence). Create undatum.yaml in your project root or ~/.undatum/config.yaml:
  ai:
    provider: openai
    api_key: ${OPENAI_API_KEY}  # Can reference env vars
    model: gpt-4o-mini
    timeout: 30
- CLI arguments (highest precedence):
  undatum analyze data.csv --autodoc --ai-provider openai --ai-model gpt-4o-mini
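To see the precedence in action (behavior inferred from the ordering above, not a separately documented example), a CLI flag should win over a conflicting environment variable:

export UNDATUM_AI_PROVIDER=ollama
undatum analyze data.csv --autodoc --ai-provider openai   # openai is used: the CLI flag beats the env var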
Converts data between different formats. Supports CSV, JSON Lines, BSON, XML, XLS, XLSX, Parquet, AVRO, and ORC.
# XML to JSON Lines
undatum convert --tagname item data.xml data.jsonl
# CSV to Parquet
undatum convert data.csv data.parquet
# JSON Lines to CSV
undatum convert data.jsonl data.csv

Supported conversions:
| From / To | CSV | JSONL | BSON | JSON | XLS | XLSX | XML | Parquet | ORC | AVRO |
|---|---|---|---|---|---|---|---|---|---|---|
| CSV | - | ✓ | ✓ | - | - | - | - | ✓ | ✓ | ✓ |
| JSONL | ✓ | - | - | - | - | - | - | ✓ | ✓ | - |
| BSON | - | ✓ | - | - | - | - | - | - | - | - |
| JSON | - | ✓ | - | - | - | - | - | - | - | - |
| XLS | - | ✓ | ✓ | - | - | - | - | - | - | - |
| XLSX | - | ✓ | ✓ | - | - | - | - | - | - | - |
| XML | - | ✓ | - | - | - | - | - | - | - | - |
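The matrix implies that pairs without a direct conversion can be chained through JSON Lines; for example (a sketch assembled from the table, not a documented shortcut), XML to Parquet:

# XML -> JSONL -> Parquet: both steps are supported conversions
undatum convert --tagname item data.xml data.jsonl
undatum convert data.jsonl data.parquet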
Extracts field names from data files. Works with CSV, JSON Lines, BSON, and XML files.
undatum headers data.jsonl
undatum headers data.csv --limit 50000

Generates detailed statistics about your dataset including field types, uniqueness, lengths, and more.
undatum stats data.jsonl
undatum stats data.csv --checkdates

Statistics include:
- Field types and array flags
- Unique value counts and percentages
- Min/max/average lengths
- Date field detection
Calculates frequency distribution for specified fields.
undatum frequency --fields category data.jsonl
undatum frequency --fields status,region data.csv

Extracts all unique values from specified field(s).
# Single field
undatum uniq --fields category data.jsonl
# Multiple fields (unique combinations)
undatum uniq --fields status,region data.jsonl

Selects and reorders columns from files. Supports filtering.
undatum select --fields name,email,status data.jsonl
undatum select --fields name,email --filter "`status` == 'active'" data.jsonl

Splits datasets into multiple files based on chunk size or field values.
# Split by chunk size
undatum split --chunksize 10000 data.jsonl
# Split by field value
undatum split --fields category data.jsonl

Validates data against built-in or custom validation rules.
# Validate email addresses
undatum validate --rule common.email --fields email data.jsonl
# Validate Russian INN
undatum validate --rule ru.org.inn --fields VendorINN data.jsonl --mode stats
# Output invalid records
undatum validate --rule ru.org.inn --fields VendorINN data.jsonl --mode invalid

Available validation rules:
- common.email - Email address validation
- common.url - URL validation
- ru.org.inn - Russian organization INN identifier
- ru.org.ogrn - Russian organization OGRN identifier
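Putting the modes above together (a sketch; wc -l is ordinary shell, not an undatum feature), you can route invalid rows to a file and count them:

# Collect invalid records, then count them
undatum validate --rule common.email --fields email data.jsonl --mode invalid > bad_emails.jsonl
wc -l < bad_emails.jsonl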
Generates data schemas from files. Supports Cerberus and other schema formats.
undatum schema data.jsonl
undatum schema data.jsonl --output schema.yaml

Query data using the MistQL query language (experimental).
undatum query data.jsonl "SELECT * WHERE status = 'active'"

Flattens nested data structures into key-value pairs.
undatum flatten data.jsonl

Applies a transformation script to each record in the file.
undatum apply --script transform.py data.jsonl output.jsonl

undatum can process files inside compressed containers (ZIP, GZ, BZ2, XZ, ZSTD) with minimal memory usage.
# Process file inside ZIP archive
undatum headers --format-in jsonl data.zip
# Process XZ compressed file
undatum uniq --fields country --format-in jsonl data.jsonl.xz

Most commands support filtering using expressions:
# Filter by field value
undatum select --fields name,email --filter "`status` == 'active'" data.jsonl
# Complex filters
undatum frequency --fields category --filter "`price` > 100" data.jsonl

Filter syntax:
- Field names: `fieldname`
- String values: 'value'
- Operators: ==, !=, >, <, >=, <=, and, or
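Combining the operators above (a sketch; chaining conditions with and/or is assumed to work as the single-condition examples do):

undatum select --fields name,email --filter "`status` == 'active' and `price` > 100" data.jsonl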
Automatic date/datetime field detection:
undatum stats --checkdates data.jsonl

This uses the qddate library to automatically identify and parse date fields.
Override automatic detection:
undatum headers --encoding cp1251 --delimiter ";" data.csv
undatum convert --encoding utf-8 --delimiter "," data.csv data.jsonl

JSON Lines is a text format where each line is a valid JSON object. It combines JSON flexibility with line-by-line processing capabilities, making it ideal for large datasets.
{"name": "Alice", "age": 30}
{"name": "Bob", "age": 25}
{"name": "Charlie", "age": 35}Standard comma-separated values format. undatum automatically detects delimiters (comma, semicolon, tab) and encoding.
Binary JSON format used by MongoDB. Efficient for binary data storage.
XML files can be converted to JSON Lines by specifying the tag name containing records.
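To make the tag-name mapping concrete (hypothetical data; the expected output shape is an assumption shown in comments):

# Given <items><item><name>Alice</name></item><item><name>Bob</name></item></items>,
# --tagname item treats each <item> element as one record:
undatum convert --tagname item data.xml data.jsonl
# Expected shape of each output line: {"name": "Alice"}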
Provider not found:
# Error: No AI provider specified
# Solution: Set environment variable or use --ai-provider
export UNDATUM_AI_PROVIDER=openai
# or
undatum analyze data.csv --autodoc --ai-provider openai

API key not found:
# Error: API key is required
# Solution: Set provider-specific API key
export OPENAI_API_KEY=sk-...
export OPENROUTER_API_KEY=sk-or-...
export PERPLEXITY_API_KEY=pplx-...

Ollama connection failed:
# Error: Connection refused
# Solution: Ensure Ollama is running and model is pulled
ollama serve
ollama pull llama3.2
# Or specify custom URL
export OLLAMA_BASE_URL=http://localhost:11434

LM Studio connection failed:
# Error: Connection refused
# Solution: Start LM Studio server and load a model
# In LM Studio: Start Server, then:
export LMSTUDIO_BASE_URL=http://localhost:1234/v1

Structured output errors:
- All providers now use JSON Schema for reliable parsing
- If a provider doesn't support structured output, it will fall back gracefully
- Check provider documentation for model compatibility
- OpenAI: Requires an API key; supports gpt-4o-mini, gpt-4o, gpt-3.5-turbo, etc.
- OpenRouter: Unified API for multiple providers; supports models from OpenAI, Anthropic, Google, etc.
- Ollama: Local models, no API key needed, but requires Ollama to be installed and running
- LM Studio: Local models with an OpenAI-compatible API; requires LM Studio to be running
- Perplexity: Requires an API key; uses the sonar model by default
- Use appropriate formats: Parquet/ORC for analytics, JSONL for streaming
- Compression: Use ZSTD or GZIP for better compression ratios
- Chunking: Split large files for parallel processing
- Filtering: Apply filters early to reduce data volume
- Streaming: undatum streams data by default for low memory usage
- AI Documentation: Use local providers (Ollama/LM Studio) for faster, free documentation generation
- Batch Processing: AI descriptions are generated per table; consider splitting large datasets
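A sketch combining the format and filtering tips (it assumes select writes JSON Lines to stdout, as the redirection examples elsewhere in this README suggest): filter early, then store the reduced set as Parquet for analytics:

# Reduce the data first, then convert the smaller set
undatum select --fields id,price --filter "`price` > 100" data.jsonl > filtered.jsonl
undatum convert filtered.jsonl filtered.parquet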
The analyze command can automatically generate field descriptions and dataset summaries using AI when --autodoc is enabled. This feature supports multiple LLM providers and uses structured JSON output for reliable parsing.
# Basic AI documentation (auto-detects provider from environment)
undatum analyze data.csv --autodoc
# Use OpenAI with specific model
undatum analyze data.csv --autodoc --ai-provider openai --ai-model gpt-4o-mini
# Use local Ollama model
undatum analyze data.csv --autodoc --ai-provider ollama --ai-model llama3.2
# Use OpenRouter to access various models
undatum analyze data.csv --autodoc --ai-provider openrouter --ai-model anthropic/claude-3-haiku
# Output to YAML with AI descriptions
undatum analyze data.csv --autodoc --output schema.yaml --outtype yaml

Create undatum.yaml in your project:
ai:
  provider: openai
  model: gpt-4o-mini
  timeout: 30

Or use ~/.undatum/config.yaml for global settings:
ai:
  provider: ollama
  model: llama3.2
  ollama_base_url: http://localhost:11434

Generate descriptions in different languages:
# English (default)
undatum analyze data.csv --autodoc --lang English
# Russian
undatum analyze data.csv --autodoc --lang Russian
# Spanish
undatum analyze data.csv --autodoc --lang Spanish

With --autodoc enabled, the analyzer will:
- Field Descriptions: Generate clear, concise descriptions for each field explaining what it represents
- Dataset Summary: Provide an overall description of the dataset based on sample data
Example output:
tables:
  - id: data.csv
    fields:
      - name: customer_id
        ftype: VARCHAR
        description: "Unique identifier for each customer"
      - name: purchase_date
        ftype: DATE
        description: "Date when the purchase was made"
    description: "Customer purchase records containing transaction details"

# 1. Analyze source data
undatum analyze source.xml
# 2. Convert to JSON Lines
undatum convert --tagname item source.xml data.jsonl
# 3. Validate data
undatum validate --rule common.email --fields email data.jsonl --mode invalid > invalid.jsonl
# 4. Get statistics
undatum stats data.jsonl > stats.json
# 5. Extract unique categories
undatum uniq --fields category data.jsonl > categories.txt
# 6. Convert to Parquet for analytics
undatum convert data.jsonl data.parquet

# Check for duplicate emails
undatum frequency --fields email data.jsonl | grep -v "1$"
# Validate all required fields
undatum validate --rule common.email --fields email data.jsonl
undatum validate --rule common.url --fields website data.jsonl
# Generate schema with AI documentation
undatum schema data.jsonl --output schema.yaml --autodoc

# 1. Analyze dataset with AI-generated descriptions
undatum analyze sales_data.csv --autodoc --ai-provider openai --output analysis.yaml
# 2. Review generated field descriptions
cat analysis.yaml
# 3. Use descriptions in schema generation
undatum schema sales_data.csv --autodoc --output documented_schema.yaml
# 4. Bulk schema extraction with AI documentation
undatum schema_bulk ./data_dir --autodoc --output ./schemas --mode distinct

Contributions are welcome! Please feel free to submit a Pull Request.
MIT License - see LICENSE file for details.
For questions, issues, or feature requests, please open an issue on GitHub.