Releases: spiceai/spiceai
v1.9.0-rc.2
Spice v1.9.0-rc.2 (Nov 11, 2025)
This is the second release candidate for v1.9.0, which introduces Spice Cayenne, a new high-performance data accelerator built on the Vortex columnar format that delivers better-than-DuckDB performance without single-file scaling limitations, and a preview of Multi-Node Distributed Query based on Apache Ballista. v1.9.0-rc.2 also upgrades to DataFusion v50 and DuckDB v1.4.1 for even higher query performance, expands search capabilities with full-text search on views and multi-column embeddings, includes significant DynamoDB and DuckDB accelerator improvements, expands the HTTP data connector to support endpoints as tables, and delivers many security and reliability improvements.
What's New in v1.9.0-rc.2
Cayenne Data Accelerator (Beta)
Introducing Cayenne: SQL as an Acceleration Format: A new high-performance Data Accelerator that simplifies multi-file data acceleration by using an embedded database (SQLite) for metadata while storing data in the Vortex columnar format, a Linux Foundation project. Cayenne delivers query and ingestion performance better than DuckDB's file-based acceleration without DuckDB's memory overhead and the scaling challenges of single DuckDB files.
Cayenne uses SQLite to manage acceleration metadata (schemas, snapshots, statistics, file tracking) through simple SQL transactions, while storing data in Vortex's compressed columnar format. This architecture provides:
Key Features:
- SQLite + Vortex Architecture: All metadata is stored in SQLite tables with standard SQL transactions, while data lives in Vortex's compressed, chunked columnar format designed for zero-copy access and efficient scanning.
- Simplified Operations: No complex file hierarchies, no JSON/Avro metadata files, no separate catalog servers—just SQL tables and Vortex data files. The entire metadata schema is intentionally simple for maximum reliability.
- Fast Metadata Access: Single SQL query retrieves all metadata needed for query planning—no multiple round trips to storage, no S3 throttling, no reconstruction of metadata state from scattered files.
- Efficient Small Changes: Dramatically reduces small file proliferation. Snapshots are just rows in SQLite tables, not new files on disk. Supports millions of snapshots without performance degradation.
- High Concurrency: Changes consist of two steps: stage Vortex files (if any), then run a single SQL transaction. Much faster conflict resolution and support for many more concurrent updates than file-based formats.
- Advanced Data Lifecycle: Full ACID transactions, delete support, and retention SQL execution on refresh commit.
Example Spicepod.yml configuration:
datasets:
- from: s3:my_table
name: accelerated_data_30d
acceleration:
enabled: true
engine: cayenne
mode: file
refresh_mode: append
retention_sql: DELETE FROM accelerated_data_30d WHERE created_at < NOW() - INTERVAL '30 days'
Note: The Cayenne Data Accelerator is in Beta with limitations.
For more details, refer to the Cayenne Documentation, the Vortex project, and the DuckLake announcement that partly inspired this design.
Multi-Node Distributed Query (Preview)
Apache Ballista Integration: Spice now supports distributed query execution based on Apache Ballista, enabling distributed queries across multiple executor nodes for improved performance on large datasets. This feature is in preview in v1.9.0-rc.2.
Architecture:
A distributed Spice cluster consists of:
- Scheduler: Responsible for distributed query planning and work queue management for the executor fleet
- Executors: One or more nodes responsible for running physical query plans
Getting Started:
Start a scheduler instance using an existing Spicepod. The scheduler is the only spiced instance that needs to be configured:
# Start scheduler (note the flight bind address override if you want it reachable outside localhost)
spiced --cluster-mode scheduler --flight 0.0.0.0:50051
Start one or more executors configured with the scheduler's flight URI:
# Start executor (automatically selects a free port if 50051 is taken)
spiced --cluster-mode executor --scheduler-url spiced://localhost:50051
Query Execution:
Queries run through the scheduler will now show a distributed_plan in EXPLAIN output, demonstrating how the query is distributed across executor nodes:
EXPLAIN SELECT count(id) FROM my_dataset;
Current Limitations:
- Accelerated datasets are currently not supported. This feature is designed for querying partitioned data lake formats (Parquet, Delta Lake, Iceberg, etc.)
- The feature is in preview and may have stability or performance limitations
- Specific acceleration support is planned for future releases
DataFusion v50 Upgrade
Spice.ai is built on the Apache DataFusion query engine. The v50 release brings significant performance improvements and enhanced reliability:
Performance Improvements 🚀:
- Dynamic Filter Pushdown: Enhanced dynamic filter pushdown for custom ExecutionPlans, ensuring filters propagate correctly through all physical operators for improved query performance.
- Partition Pruning: Expanded partition pruning support ensures that unnecessary partitions are skipped when filters are not used, reducing data scanning overhead and improving query execution times.
- Apache Spark Compatible Functions: Added support for Spark-compatible functions including array, bit_get/bit_count, bitmap_count, crc32/sha1, date_add/date_sub, if, last_day, like/ilike, luhn_check, mod/pmod, next_day, parse_url, rint, and width_bucket (see the sketch below).
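A minimal illustrative sketch using a few of these functions (assuming the Spark-compatible functions are registered in the runtime; the literals are arbitrary):
SELECT
  last_day(DATE '2025-11-11') AS month_end,   -- 2025-11-30
  mod(10, 3) AS remainder,                    -- 1
  width_bucket(5.35, 0.0, 10.0, 5) AS bucket; -- 3 (third of five equal-width buckets)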
Bug Fixes & Reliability: Resolved issues with partition name validation and empty execution plans when vector index lists are empty. Fixed timestamp support for partition expressions, enabling better partitioning for time-series data.
See the Apache DataFusion 50.0.0 Release for more details.
DuckDB v1.4.1 Upgrade and Accelerator Improvements
DuckDB v1.4.1: DuckDB has been upgraded to v1.4.1, which includes several performance optimizations.
Composite ART Index Support: DuckDB in Spice now supports composite (multi-column) Adaptive Radix Tree (ART) indexes for accelerated table scans. When queries filter on multiple columns fully covered by a composite index, the optimizer automatically uses index scans instead of full table scans, delivering significant performance improvements for selective queries.
Example configuration:
datasets:
- from: file://data.parquet
name: sales
acceleration:
enabled: true
engine: duckdb
indexes:
'(region, product_id)': enabled
Performance example with composite index on 7.5M rows:
SELECT * FROM sales WHERE region = 'US' AND product_id = 12345;
-- Without index: 0.282s
-- With composite index (region, product_id): 0.037s
-- Performance improvement: 7.6x faster with composite index
DuckDB Intermediate Materialization: Queries with indexes now use intermediate materialization (WITH ... AS MATERIALIZED) to leverage faster index scans. Currently supported for non-federated queries (query_federation: disabled) against a single table with indexes only. When predicates cover more columns than the index, the optimizer rewrites queries to first materialize index-filtered results, then apply remaining predicates. This optimization can deliver significant performance improvements for selective queries.
Example configuration:
datasets:
- from: file://sales_data.parquet
name: sales
acceleration:
enabled: true
engine: duckdb
mode: file
params:
query_federation: disabled # Required currently for intermediate materialization
indexes:
'(region, product_id)': enabled
Performance example:
-- Query with indexed columns (region, product_id) plus additional filter (amount)
SELECT * FROM sales
WHERE region = 'US' AND product_id = 12345 AND amount > 1000;
-- Optimized execution time: 0.031s (with intermediate materialization)
-- Standard execution time: 0.108s (without optimization)
-- Performance improvement: ~3.5x faster
The optimizer automatically rewrites the query to:
WITH _intermediate_materialize AS MATERIALIZED (
SELECT * FROM sales WHERE region = 'US' AND product_id = 12345
)
SELECT * FROM _intermediate_materialize WHERE amount > 1000;
Parquet Buffering for Partitioned Writes: DuckDB partitioned writes in table mode now support Parquet buffering, reducing memory usage and improving write performance for large datasets.
Retention SQL on Refresh Commit: DuckDB accelerations now support running retention SQL on refresh commit, enabling automatic data cleanup and lifecycle management during refresh operations.
UTC Timezone for DuckDB: DuckDB now uses UTC as the default timezone, ensuring consistent behavior for time-based queries across different environments.
Example Spicepod.yml configuration:
datasets:
- from: s3://my_bucket/large_table/
name: partitioned_data
acceleration:
enabled: true
...
v1.7.3
Spice v1.7.3 (Nov 06, 2025)
Spice v1.7.3 is a focused patch release that improves AWS SDK credential handling by adding retry logic for transient network failures.
What's Fixed
- AWS SDK credential resilience: Improved credential initialization with automatic retry using Fibonacci backoff for ConnectorError failures, resulting in more reliable connections to AWS services.
Upgrading
To upgrade to v1.7.3, use one of the following methods:
CLI:
spice upgrade
Homebrew:
brew upgrade spiceai/spiceai/spice
Docker:
Pull the spiceai/spiceai:1.7.3 image:
docker pull spiceai/spiceai:1.7.3
For available tags, see DockerHub.
Helm:
helm repo update
helm upgrade spiceai spiceai/spiceai
AWS Marketplace:
🎉 Spice is available in the AWS Marketplace.
What's Changed
- Only retry credentials on ConnectorError (03501ac) by @phillipleblanc and @kczimm
v1.9.0-rc.1
Spice v1.9.0-rc.1 (Nov 3, 2025)
This is the first release candidate for v1.9.0, which introduces Cayenne, a new high-performance data accelerator built on the Vortex columnar format that delivers DuckDB-comparable performance without scaling limitations. This release also upgrades to DataFusion v50 for improved query performance, expands search capabilities with full-text search on views and multi-column embeddings, includes significant DynamoDB and DuckDB accelerator improvements, and delivers security and reliability enhancements.
What's New in v1.9.0-rc.1
Cayenne Data Accelerator (Alpha)
Introducing Cayenne: SQL as an Acceleration Format: A new high-performance data accelerator that simplifies multi-file data acceleration by using an embedded database (SQLite) for metadata while storing data in the Vortex columnar format. Cayenne delivers query and ingestion performance comparable to or better than DuckDB's file-based acceleration, without DuckDB's memory overhead and the scaling challenges of single DuckDB files.
Cayenne uses SQLite to manage acceleration metadata (schemas, snapshots, statistics, file tracking) through simple SQL transactions, while storing actual data in Vortex's compressed columnar format. This architecture provides:
Key Features:
- SQLite + Vortex Architecture: All metadata is stored in SQLite tables with standard SQL transactions, while data lives in Vortex's compressed, chunked columnar format designed for zero-copy access and efficient scanning.
- Simplified Operations: No complex file hierarchies, no JSON/Avro metadata files, no separate catalog servers—just SQL tables and Vortex data files. The entire metadata schema is intentionally simple for maximum reliability.
- Fast Metadata Access: Single SQL query retrieves all metadata needed for query planning—no multiple round trips to storage, no S3 throttling, no reconstruction of metadata state from scattered files.
- Efficient Small Changes: Dramatically reduces small file proliferation. Snapshots are just rows in SQLite tables, not new files on disk. Supports millions of snapshots without performance degradation.
- High Concurrency: Changes consist of two steps: stage Vortex files (if any), then run a single SQL transaction. Much faster conflict resolution and support for many more concurrent updates than file-based formats.
- Advanced Data Lifecycle: Full ACID transactions, delete support, and retention SQL execution on refresh commit.
Example Spicepod.yml configuration:
datasets:
- from: s3:my_table
name: accelerated_data
acceleration:
enabled: true
engine: cayenne
retention:
sql: DELETE FROM accelerated_data WHERE created_at < NOW() - INTERVAL '30 days'
Note: The Cayenne Data Accelerator is in Alpha with limitations.
For more details, refer to the Cayenne Documentation, the Vortex project, and the DuckLake announcement that partly inspired this design.
DataFusion v50 Upgrade
Spice.ai is built on the DataFusion query engine. The v50 release brings significant performance improvements and enhanced reliability:
Performance Improvements 🚀:
- Dynamic Filter Pushdown: Enhanced dynamic filter pushdown for custom ExecutionPlans, ensuring filters propagate correctly through all physical operators for improved query performance.
- Partition Pruning: Expanded partition pruning support ensures that unnecessary partitions are skipped when filters are not used, reducing data scanning overhead and improving query execution times.
Bug Fixes & Reliability: Resolved issues with partition name validation and empty execution plans when vector index lists are empty. Fixed timestamp support for partition expressions, enabling better partitioning for time-series data.
See the Apache DataFusion 50.0.0 Release for more details.
DynamoDB Data Connector Improvements
Improved Query Performance: The DynamoDB Data Connector now includes improved filter handling for edge cases, parallel scan support for faster data ingestion, and better error handling for misconfigured queries. These improvements enable more reliable and performant access to DynamoDB data.
Example Spicepod.yml configuration:
datasets:
- from: dynamodb:my_table
name: ddb_data
params:
scan_segments: 10 # Default `auto`, which calculates optimal segments based on number of rows
Search & Embeddings Enhancements
Full-Text Search on Views: Full-text search indexes are now supported on views, enabling advanced search scenarios over pre-aggregated or transformed data. This extends the power of Spice's search capabilities beyond base datasets.
Multi-Column Embeddings on Views: Views now support embedding columns, enabling vector search and semantic retrieval on view data. This is useful for search over aggregated or joined datasets.
Vector Engines on Views: Vector search engines are now available for views, enabling similarity search over complex queries and transformations.
Example Spicepod.yml configuration:
views:
- name: aggregated_reviews
sql: SELECT review_id, review_text FROM reviews WHERE rating > 4
embeddings:
- column: review_text
model: openai:text-embedding-3-small
DuckDB Accelerator Improvements
Parquet Buffering for Partitioned Writes: DuckDB partitioned writes in table mode now support Parquet buffering, reducing memory usage and improving write performance for large datasets.
Retention SQL on Refresh Commit: DuckDB accelerations now support running retention SQL on refresh commit, enabling automatic data cleanup and lifecycle management during refresh operations.
UTC Timezone for DuckDB: DuckDB now uses UTC as the default timezone, ensuring consistent behavior for time-based queries across different environments.
Example Spicepod.yml configuration:
datasets:
- from: s3://my_bucket/large_table/
name: partitioned_data
acceleration:
enabled: true
engine: duckdb
mode: file
retention:
sql: DELETE FROM partitioned_data WHERE event_time < NOW() - INTERVAL '7 days'
Query Performance Optimizations
Optimized Prepared Statements: Prepared statement handling has been optimized for better performance with parameterized queries, reducing planning overhead and improving execution time for repeated queries.
Large RecordBatch Chunking: Large Arrow RecordBatch objects are now automatically chunked to control memory usage during query execution, preventing memory exhaustion for queries returning large result sets.
Security & Reliability Improvements
Enhanced HTTP Client Security: HTTP client usage across the runtime has been hardened with improved TLS validation, certificate pinning for critical endpoints, and better error handling for network failures.
ODBC Connector Improvements: Removed unwrap calls from the ODBC connector, improving error handling and reliability. Fixed secret handling and Kubernetes secret integration.
CLI Permissions Hardening: Tightened file permissions for the CLI and install script, ensuring secure defaults for configuration files and credentials.
Oracle Instant Client Pinning: Oracle Instant Client downloads are now pinned to specific SHAs, ensuring reproducible builds and preventing supply chain attacks.
Observability & Tracing
DataFusion Log Emission: The Spice runtime now emits DataFusion internal logs, providing deeper visibility into query planning and execution for debugging and performance analysis.
AI Completions Tracing: Fixed tracing so that ai_completions operations are correctly parented under sql_query traces, improving observability for AI-powered queries.
Git Data Connector (Alpha)
Version-Controlled Data Access: The new Git Data Connector (Alpha) enables querying datasets stored in Git repositories. This connector is ideal for use cases involving configuration files, documentation, or any data tracked in version control.
Example Spicepod.yml configuration:
datasets:
- from: git:https://github.com/myorg/myrepo
name: git_metrics
params:
file_format: csv
For more details, refer to the Git Data Connector Documentation.
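Once configured, the dataset can be queried like any other table; a minimal sketch using the dataset defined above:
SELECT * FROM git_metrics LIMIT 5;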
Additional Improvements & Bug Fixes
- Reliability: Fixed refresh worker panics with recovery handling to prevent runtime crashes during acceleration refreshes.
- Reliability: Improved error messages for missing or invalid spicepod.yaml files, providing actionable feedback for misconfiguration.
- Reliability: Fixed DuckDB metadata pointer loading issues for snapshots.
- Performance: Ensured ListingTable partitions are pruned correctly when filters are not used.
- Reliability: Fixed vector dimension determination for partitioned indexes.
- Search: Fixed casing issues in Reciprocal Rank Fusion (RRF) for hybrid search queries.
- Search: Fixed search field handling as metadata for chunked search indexes.
- Validation: Added timestamp support for partition expressions.
- Validation: Fixed regexp_match function for DuckDB datasets.
- Validation: Fixed partition name validation for improved reliability.
Contributors
v1.7.2
Spice v1.7.2 (Oct 30, 2025)
Spice v1.7.2 is a focused patch release that hardens dataset refresh handling when a downstream dependency panics. Instances now recover automatically from refresh worker panics triggered by downstream dependencies (such as corrupted Parquet files), and operators gain visibility into these events through a new metric.
What's Fixed
- Refresh worker resilience: The acceleration refresh loop now catches and recovers from panics raised by the underlying Arrow Parquet reader (for example, when a Parquet file is modified mid-read). The refresh worker immediately resumes normal operation and surfaces a clear error instead of stalling future refresh attempts.
- Telemetry visibility: Added the Prometheus counter dataset_acceleration_refresh_worker_panics, emitted with a per-dataset label whenever a refresh worker panic is observed. This enables alerting on unexpected refresh interruptions even though recovery is automatic.
Upgrading
To upgrade to v1.7.2, use one of the following methods:
CLI:
spice upgrade
Homebrew:
brew upgrade spiceai/spiceai/spice
Docker:
Pull the spiceai/spiceai:1.7.2 image:
docker pull spiceai/spiceai:1.7.2
For available tags, see DockerHub.
Helm:
helm repo update
helm upgrade spiceai spiceai/spiceai
AWS Marketplace:
🎉 Spice is available in the AWS Marketplace.
What's Changed
- Handle refresh worker panics and add recovery test (b01a8d9) by @phillipleblanc
v1.8.3
Spice v1.8.3 (Oct 27, 2025)
Spice v1.8.3 is a patch release focused on performance, reliability, and observability. This release delivers optimizations for DuckDB acceleration, parameterized queries, and query plans. A new opt-in dedicated thread pool for queries is now in preview.
What's New in v1.8.3
DuckDB Data Accelerator Improvements
- Connection Pool Sizing: The DuckDB accelerator now supports a configurable connection_pool_size parameter, providing fine-grained control over concurrent query execution. This enables tuning for high-concurrency workloads and improved resource utilization.
Example Spicepod.yaml snippet:
datasets:
- from: postgres:my_table
name: my_table
acceleration:
enabled: true
engine: duckdb
params:
connection_pool_size: 10
- Automatic Statistics Recomputation: The new on_refresh_recompute_statistics parameter, on by default, triggers automatic ANALYZE execution after refreshes. This keeps DuckDB optimizer statistics up-to-date, ensuring efficient query plans and optimal performance.
Example Spicepod.yaml snippet:
datasets:
- from: postgres:my_table
name: my_table
acceleration:
enabled: true
engine: duckdb
params:
on_refresh_recompute_statistics: disabled # default enabled
Task History SQL Query Plan Capture & Configuration
Spice now supports automatic capture and storage of SQL query plans (via EXPLAIN or EXPLAIN ANALYZE) in the task history, enabling deeper analysis and debugging of query execution. The feature is configurable, controlling which queries are captured based on duration thresholds and plan type.
- New Configuration Options:
  - task_history.captured_plan: Controls which plan is captured (none, explain, or explain analyze). Default none.
  - task_history.min_sql_duration: Minimum query duration before a plan is captured.
  - task_history.min_plan_duration: Minimum plan execution duration before a plan is captured.
Example spicepod.yaml snippet:
runtime:
task_history:
captured_plan: explain analyze
min_sql_duration: 5s
min_plan_duration: 10s
Query plans are captured asynchronously to avoid blocking query execution. The result of the plan is stored in the standard sql_query output in the task history.
Learn more in the Task History Documentation.
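As a hedged sketch, captured plans can then be inspected by querying the task history table (the runtime.task_history table and the captured_output and start_time column names are assumptions here, not confirmed by this release note):
SELECT task, start_time, captured_output
FROM runtime.task_history      -- assumed table name
WHERE task = 'sql_query'       -- assumed task label
ORDER BY start_time DESC
LIMIT 5;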
Query Performance Optimizations
- Optimized Prepared Statements (Parameterized Queries): Prepared statement caching for parameterized SQL queries has been improved, reducing planning overhead for repeated queries with different parameters. This results in faster execution and lower latency for workloads that reuse query structures.
- Limit Pushdown via BytesProcessedExec: Introduces the BytesProcessedExec physical operator, enabling limit pushdown for large datasets. This optimization reduces the amount of data processed and improves top-k query performance.
Dedicated Query Thread Pool (Opt-In)
Spice now supports running query execution and accelerated refreshes on a dedicated thread pool, separate from the HTTP server. This prevents heavy query workloads from slowing down API responses, keeping health and readiness checks fast. Opt-In for v1.8.3: This feature is opt-in for this release and will become enabled by default (opt-out) in v1.9.
Example Spicepod.yaml snippet:
runtime:
params:
dedicated_thread_pool: sql_engine # Default: disabled
Validation & Reliability Improvements
- Selective Evaluation Scorer Loading: Evaluation scorers are now loaded only when evaluation is explicitly defined, reducing unnecessary initialization and improving startup performance.
- Improved Error Reporting: Enhanced error messages for misconfigured full-text search (FTS) on datasets and views, providing actionable feedback for configuration issues.
REPL & Usability
- Execution Time Display: The Spice REPL now displays query execution time even when queries return no results, improving user feedback and diagnostics.
Contributors
Breaking Changes
No breaking changes.
Cookbook Updates
No major cookbook updates.
The Spice Cookbook includes 81 recipes to help you get started with Spice quickly and easily.
Upgrading
To upgrade to v1.8.3, use one of the following methods:
CLI:
spice upgrade
Homebrew:
brew upgrade spiceai/spiceai/spice
Docker:
Pull the spiceai/spiceai:1.8.3 image:
docker pull spiceai/spiceai:1.8.3
For available tags, see DockerHub.
Helm:
helm repo update
helm upgrade spiceai spiceai/spiceai
AWS Marketplace:
🎉 Spice is now available in the AWS Marketplace!
What's Changed
Changelog
- Fix generate spicepod schema by @phillipleblanc in #7464
- Only load eval scorers when eval defined by @Jeadie in #7549
- BytesProcessedExec to allow optimizer to do limit pushdown by @mach-kernel in #7539
- Enhancement: Add spill_compression to runtime config by @krinart in #7505
- Task History min_sql_duration filter support by @lukekim in #7698
- Show error if FTS is misconfigured for datasets/views by @krinart in #7458
- project_schema when using EmptyExec by @kczimm in #7543
- Fix score order for one test case by @Jeadie in #7595
- Fix license issue in table-providers by @phillipleblanc in #7620
- Split integration tests into 3 partitions by @phillipleblanc in #7635
- Fix OSS docker release trigger when release marked as latest by @phillipleblanc in #7668
- Properly set auth headers in github_release.py by @krinart in #7560
- Run Datafusion queries on a separate Tokio runtime by @phillipleblanc in #7586
- Update BytesProcessedExec snapshots by @mach-kernel in #7637
- Display execution time in Spice REPL for no results by @sgrebnov in #7713
- Add support for DuckDB connection_pool_size param by @sgrebnov in #7716
- Task History capture and store SQL query plans by @lukekim in #7701
v1.8.2
Spice v1.8.2 (Oct 21, 2025)
Spice v1.8.2 is a patch release focused on reliability, validation, performance, and bug fixes, with improvements across DuckDB acceleration, S3 Vectors, document tables, and HTTP search.
What's New in v1.8.2
Support Table Relations in /v1/search HTTP Endpoint
Spice now supports table relations for the additional_columns and where parameters in the /v1/search endpoint. This enables improved search for multi-dataset use cases, where filters and additional columns can be scoped to specific datasets.
Example:
curl 'http://localhost:8090/v1/search' \
-H 'Content-Type: application/json' \
-H 'Accept: application/json' -d '{
"text": "hello world",
"additional_columns": ["tbl1.foo", "tbl2.bar", "baz"],
"where": "tbl1.foo > 100000",
"limit": 5
}'
In this example, search results from the tbl1 dataset will include columns foo and baz, where foo > 100000. For tbl2, columns bar and baz will be returned.
DuckDB Data Accelerator Table Partitioning & Indexing
- Configurable DuckDB Index Scan: DuckDB acceleration now supports configurable duckdb_index_scan_percentage and duckdb_index_scan_max_count parameters, enabling fine-tuning of index scan behavior for improved query performance.
Example:
datasets:
- from: postgres:my_table
name: my_table
acceleration:
enabled: true
engine: duckdb
mode: file
params:
# When combined, DuckDB will use an index scan when the number of qualifying rows is less than the maximum of these two thresholds
duckdb_index_scan_percentage: '0.10' # 10% as decimal
duckdb_index_scan_max_count: '1000'
- Hive-Style Partitioning: In file-partitioned mode, the DuckDB data accelerator uses Hive-style partitioning for more efficient file management.
- Table-Based Partitioning: Spice now supports partitioning DuckDB accelerations within a single file. This approach maintains ACID guarantees for full and append mode refreshes, while optimizing resource usage and improving query performance. Configure via the partition_mode parameter:
datasets:
- from: file:test_data.parquet
name: test_data
params:
file_format: parquet
acceleration:
enabled: true
engine: duckdb
mode: file
params:
partition_mode: tables
partition_by:
- bucket(100, Field1)
S3 Vectors Reliability
- Race Condition Fix: Resolved a race condition in S3 Vectors index and bucket creation. The runtime also now checks if an index or bucket exists after a ConflictException, ensuring robust error handling during index creation and improving reliability for large-scale multi-index vector search.
Document Table Improvements
- Primary Key Update: Document tables now use the location column as the primary key, improving performance, consistency, and query reliability.
Additional Improvements & Bugfixes
- Reliability: Improved error handling and resource checks for S3 Vectors and DuckDB acceleration.
- Validation: Expanded validation for partitioning and index creation.
- Performance: Optimized partition refresh and index scan logic.
- Bugfix: Don't nullify DuckDB release callbacks for schemas.
Contributors
Breaking Changes
No breaking changes.
Cookbook Updates
No major cookbook updates.
The Spice Cookbook includes 81 recipes to help you get started with Spice quickly and easily.
Upgrading
To upgrade to v1.8.2, use one of the following methods:
CLI:
spice upgrade
Homebrew:
brew upgrade spiceai/spiceai/spice
Docker:
Pull the spiceai/spiceai:1.8.2 image:
docker pull spiceai/spiceai:1.8.2
For available tags, see DockerHub.
Helm:
helm repo update
helm upgrade spiceai spiceai/spiceai
AWS Marketplace:
🎉 Spice is now available in the AWS Marketplace!
What's Changed
Changelog
- Update mongo config for benchmarks by @krinart in #7546
- Configurable DuckDB duckdb_index_scan_percentage & duckdb_index_scan_max_count by @lukekim in #7551
- Fix race condition in S3 Vectors index and bucket creation by @kczimm in #7577
- Use 'location' as primary key for document tables by @Jeadie in #7567
- Update official Docker builds to use release binaries by @phillipleblanc in #7597
- Hive-style partitioning for DuckDB file mode by @kczimm in #7563
- New Generate Changelog workflow by @krinart in #7562
- Add support for DuckDB table-based partitioning by @sgrebnov in #7581
- DuckDB table partitioning: delete partitions that no longer exist after full refresh by @sgrebnov in #7614
- Rename duckdb_partition_mode to partition_mode param by @sgrebnov in #7622
- Fix license issue in table-providers by @phillipleblanc in #7620
- Make DuckDB table partition data write threshold configurable by @sgrebnov in #7626
- fix: Don't nullify DuckDB release callbacks for schemas by @peasee in #7628
- Fix integration tests by reverting the use of batch inserts w/ prepared statements by @phillipleblanc in #7630
- Return TableProvider from CandidateGeneration::search by @Jeadie in #7559
- Handle table relations in HTTP v1/search by @Jeadie in #7615
v1.8.1
Spice v1.8.1 (Oct 13, 2025)
Spice v1.8.1 is a patch release that adds Acceleration Snapshot Indexes and includes a number of bug fixes and performance improvements.
What's New in v1.8.1
Acceleration Snapshot Indexes
- Management of Acceleration Snapshots has been improved by adopting an Iceberg-inspired metadata.json, which now encodes pointer IDs, schema serialization, and a robust checksum and size that are validated before loading the snapshot.
- Acceleration Snapshot Metrics: The following metrics are now available for Acceleration Snapshots:
  - dataset_acceleration_snapshot_bootstrap_duration_ms: The time it took the runtime to download the snapshot; only emitted when the snapshot is initially downloaded.
  - dataset_acceleration_snapshot_bootstrap_bytes: The number of bytes downloaded to bootstrap the acceleration from the snapshot.
  - dataset_acceleration_snapshot_bootstrap_checksum: The checksum of the snapshot used to bootstrap the acceleration.
  - dataset_acceleration_snapshot_failure_count: Number of failures encountered when writing a new snapshot at the end of the refresh cycle. A snapshot failure does not prevent the refresh from completing.
  - dataset_acceleration_snapshot_write_timestamp: Unix timestamp in seconds when the last snapshot was completed.
  - dataset_acceleration_snapshot_write_duration_ms: The time it took to write the snapshot to object storage.
  - dataset_acceleration_snapshot_write_bytes: The number of bytes written on the last snapshot write.
  - dataset_acceleration_snapshot_write_checksum: The SHA256 checksum of the last snapshot write.
To learn more, see the Acceleration Snapshots Documentation and the Metrics Documentation.
Improved Regular Expression for DuckDB acceleration
Regular expression support has been expanded when using DuckDB acceleration for functions like regexp_like and regexp_match.
For more details, refer to the SQL Reference for the list of available regular expression functions.
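For illustration, a minimal sketch against a hypothetical DuckDB-accelerated logs dataset with a message column:
-- regexp_like returns a boolean match
SELECT message FROM logs WHERE regexp_like(message, '^ERROR [0-9]+') LIMIT 10;
-- regexp_match returns the matched groups instead of a boolean
SELECT regexp_match(message, '^(ERROR|WARN) ([0-9]+)') FROM logs LIMIT 10;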
Additional Improvements & Bugfixes
- Reliability: Resolved an issue with partitioning on empty partition sets.
- Validation: Added better validation for incorrectly configured Spicepods.
- Reliability: Fixed partition_by accelerations when a projection is applied on empty partition sets.
- Performance: Ensured ListingTable partitions are pruned when filters are not used.
- Performance: Don't download acceleration snapshots if the acceleration is already present.
- Performance: Refactored some blocking I/O and synchronization in the async codebase by moving operations to tokio::task::spawn_blocking, replacing blocking locks with async-friendly variants.
- Bugfix: Nullable fields are now supported for S3 Vectors index columns.
Contributors
Breaking Changes
No breaking changes.
Cookbook Updates
- New Accelerated Snapshots Recipe - The recipe shows how to bootstrap DuckDB accelerations from object storage to skip cold starts.
The Spice Cookbook includes 81 recipes to help you get started with Spice quickly and easily.
Upgrading
To upgrade to v1.8.1, use one of the following methods:
CLI:
spice upgrade
Homebrew:
brew upgrade spiceai/spiceai/spice
Docker:
Pull the spiceai/spiceai:1.8.1 image:
docker pull spiceai/spiceai:1.8.1
For available tags, see DockerHub.
Helm:
helm repo update
helm upgrade spiceai spiceai/spiceai
AWS Marketplace:
🎉 Spice is now available in the AWS Marketplace!
What's Changed
Changelog
- Remove println in datafusion by @phillipleblanc in #7461
- fix: Ensure ListingTable partitions are pruned when filters are not used by @peasee in #7471
- Create runtime-secrets crate by @phillipleblanc in #7474
- Create runtime-parameters crate by @phillipleblanc in #7475
- Don't download the snapshot if the acceleration is present by @phillipleblanc in #7477
- Add support for S3 dataset params by @phillipleblanc in #7476
- Add better snapshot validation for incorrectly configured spicepods by @phillipleblanc in #7487
- Move blocking/sync I/O to spawn blocking by @lukekim in #7462
- Validate spicepod file exists before running tests by @lukekim in #7492
- Make snapshot reading/writing more robust with Iceberg-like metadata.json by @phillipleblanc in #7486
- Create runtime-request-context crate by @Jeadie in #7459
- Two minor fixes for AI udf tests by @krinart in #7503
- Add model response timeout for ai udf tests by @krinart in #7504
- Add sccache for build test operator by @lukekim in #7515
- Fix partition_by accelerations when a projection is applied on empty partition sets by @phillipleblanc in #7526
- Nullable fields for index columns by @Jeadie in #7523
v1.8.0
Spice v1.8.0 (Oct 6, 2025)
Spice v1.8.0 delivers major advances in data writes, scalable vector search, and, now in preview, managed acceleration snapshots for fast cold starts. This release introduces write support for Iceberg tables using standard SQL INSERT INTO, partitioned S3 Vector indexes for petabyte-scale vector search, and a preview of the AI SQL function for direct LLM integration in SQL. Additional improvements include enhanced reliability and the v3.0.3 release of the Spice.js Node.js SDK.
What's New in v1.8.0
Iceberg Table Write Support (Preview)
Append Data to Iceberg Tables with SQL INSERT INTO: Spice now supports writing to Iceberg tables and catalogs using standard SQL INSERT INTO statements. This enables data ingestion, transformation, and pipeline use cases—no Spark or external writer required.
- Append-only: Initial version targets appends; no overwrite or delete.
- Schema validation: Inserted data must match the target table schema.
- Secure by default: Writes are only enabled for datasets or catalogs explicitly marked with access: read_write.
Example Spicepod configuration:
catalogs:
- from: iceberg:https://glue.ap-northeast-3.amazonaws.com/iceberg/v1/catalogs/111111/namespaces
name: ice
access: read_write
datasets:
- from: iceberg:https://iceberg-catalog-host.com/v1/namespaces/my_namespace/tables/my_table
name: iceberg_table
access: read_write
Example SQL usage:
-- Insert from another table
INSERT INTO iceberg_table
SELECT * FROM existing_table;
-- Insert with values
INSERT INTO iceberg_table (id, name, amount)
VALUES (1, 'John', 100.0), (2, 'Jane', 200.0);
-- Insert into catalog table
INSERT INTO ice.sales.transactions
VALUES (1001, '2025-01-15', 299.99, 'completed');
Note: Only Iceberg datasets and catalogs with access: read_write support writes. Internal Spice tables and other connectors remain read-only.
Learn more in the Iceberg Data Connector documentation.
Acceleration Snapshots for Fast Cold Starts (Preview)
Bootstrap Managed Accelerations from Object Storage: Spice now supports managed acceleration snapshots in preview, enabling datasets accelerated with file-based engines (DuckDB or SQLite) to bootstrap from a snapshot stored in object storage (such as S3) if the local acceleration file does not exist on startup. This dramatically reduces cold start times and enables ephemeral storage for accelerations with persistent recovery.
Key features:
- Rapid readiness: Datasets can become ready in seconds by downloading a pre-built snapshot, skipping lengthy initial acceleration.
- Hive-style partitioning: Snapshots are organized by month, day, and dataset for easy retention and management.
- Flexible bootstrapping: Configurable fallback and retry behavior if a snapshot is missing or corrupted.
Example Spicepod configuration:
snapshots:
enabled: true
location: s3://some_bucket/some_folder/ # Folder for storing snapshots
bootstrap_on_failure_behavior: warn # Options: warn, retry, fallback
params:
s3_auth: iam_role # All S3 dataset params accepted here
datasets:
- from: s3://some_bucket/some_table/
name: some_table
params:
file_format: parquet
s3_auth: iam_role
acceleration:
enabled: true
snapshots: enabled # Options: enabled, disabled, bootstrap_only, create_only
engine: duckdb
mode: file
params:
duckdb_file: /nvme/some_table.db
How it works:
- On startup, if the acceleration file does not exist, Spice checks the snapshot location for the latest snapshot and downloads it.
- Snapshots are stored as: s3://some_bucket/some_folder/month=2025-09/day=2025-09-30/dataset=some_table/some_table_<timestamp>.db
- If no snapshot is found, a new acceleration file is created as usual.
- Snapshots are written after each refresh (unless configured otherwise).
Supported snapshot modes:
- enabled: Download and write snapshots.
- bootstrap_only: Only download on startup, do not write new snapshots.
- create_only: Only write snapshots, do not download on startup.
- disabled: No snapshotting.
Note: This feature is only supported for file-based accelerations (DuckDB or SQLite) with dedicated files.
Why use acceleration snapshots?
- Faster cold starts: Skip waiting for full acceleration on startup.
- Ephemeral storage: Use fast local disks (e.g., NVMe) for acceleration, with persistent recovery from object storage.
- Disaster recovery: Recover from federated source outages by bootstrapping from the latest snapshot.
Learn more in the Acceleration Snapshots documentation.
Partitioned S3 Vector Indexes
Efficient, Scalable Vector Search with Partitioning: Spice now supports partitioning Amazon S3 Vector indexes and scatter-gather queries using a partition_by expression in the dataset vector engine configuration. Partitioned indexes enable faster ingestion, lower query latency, and scale to billions of vectors.
Example Spicepod configuration:
datasets:
- name: reviews
vectors:
enabled: true
engine: s3_vectors
params:
s3_vectors_bucket: my-bucket
s3_vectors_index: base-embeddings
partition_by:
- 'bucket(50, PULocationID)'
columns:
- name: body
embeddings:
from: bedrock_titan
- name: title
embeddings:
from: bedrock_titan
See the Amazon S3 Vectors documentation for details.
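A hedged sketch of querying the partitioned index with the vector_search table function (the query text, the review_id column, and the score column name are illustrative assumptions, not confirmed by this release note):
SELECT review_id, title, score
FROM vector_search(reviews, 'slow pickup at the airport')
ORDER BY score DESC
LIMIT 10;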
AI SQL function for LLM Integration (Preview)
LLMs Directly In SQL: A new asynchronous ai SQL function enables direct calls to LLMs from SQL queries for text generation, translation, classification, and more. This feature is released in preview and supports both default and model-specific invocation.
Example Spicepod model configuration:
models:
- name: gpt-4o
from: openai:gpt-4o
params:
openai_api_key: ${secrets:openai_key}Example SQL usage:
-- basic usage with default model
SELECT ai('hi, this prompt is directly from SQL.');
-- basic usage with specified model
SELECT ai('hi, this prompt is directly from SQL.', 'gpt-4o');
-- Using row data as input to the prompt
SELECT ai(concat_ws(' ', 'Categorize the zone', Zone, 'in a single word. Only return the word.')) AS category
FROM taxi_zones
LIMIT 10;
Learn more in the SQL Reference AI documentation.
Spice.js v3.0.3 SDK
Spice.js v3.0.3 Released: The official Spice.ai Node.js/JavaScript SDK has been updated to v3.0.3, bringing cross-platform support, new APIs, and improved reliability for both Node.js and browser environments.
- Modern Query Methods: Use sql(), sqlJson(), and nsql() for flexible querying, streaming, and natural language to SQL.
- Browser Support: SDK now works in browsers and web applications, automatically selecting the optimal transport (gRPC or HTTP).
- Health Checks & Dataset Refresh: Easily monitor Spice runtime health and trigger dataset refreshes on demand.
- Automatic HTTP Fallback: If gRPC/Flight is unavailable, the SDK falls back to HTTP automatically.
- Migration Guidance: v3 requires Node.js 20+, uses camelCase parameters, and introduces a new package structure.
Example usage:
import { SpiceClient } from '@spiceai/spice';
const client = new SpiceClient(apiKey);
const table = await client.sql('SELECT * FROM my_table LIMIT 10');
console.table(table.toArray());
See Spice.js SDK documentation for full details, migration tips, and advanced usage.
Additional Improvements
- Reliability: Improved logging, error handling, and network readiness checks across connectors (Iceberg, Databricks, etc.).
- Vector search durability and scale: Refined logging, stricter default limits, safeguards against index-only scans and duplicate results, and always-accessible metadata for robust queryability at scale.
- Cache behavior: Tightened cache logic for modification queries.
- Full-Text Search: FTS metadata columns now usable in projections; max search results increased to 1000.
- RRF Hybrid Search: Reciprocal Rank Fusion (RRF) UDTF enhancements for advanced hybrid search scenarios.
Contributors
Breaking Changes
This release introduces two breaking changes associated with the search observability and tooling.
Firstly, the document_similarity tool has been renamed to search, with an equivalent change to the tracing of these tool calls:
## Old: v1.7.1
>> spice trace tool_use::document_similarity
>> curl -XPOST http://localhost:8090/v1/tools/document_similarity \
-d '{
"datasets": ["my_tbl"],
"text": "Welcome to another Spice release"
}'
## New: v1.8.0
>> spice trace tool_use::search
>> curl -XPOST http://localhost:8090/v1/tools/search \
-d '{
"datasets": ["my_tbl"],
"text": "Welcome...v1.7.1
Spice v1.7.1 (Sep 29, 2025)
Spice v1.7.1 is a patch release focused on search improvements, bug fixes, and performance enhancements. This release introduces the Reciprocal Rank Fusion (RRF) user-defined table function (UDTF) for hybrid search, improves vector and text search reliability, and resolves several issues across the runtime, connectors, and query engine.
What's New in v1.7.1
Reciprocal Rank Fusion (RRF) UDTF: Spice now supports Reciprocal Rank Fusion (RRF) as a user-defined table function, enabling advanced hybrid search scenarios that combine results from multiple search methods (e.g., vector and text search) for improved relevance ranking.
Features:
- Multi-search fusion: Combine results from vector_search, text_search, and other search UDTFs in a single query.
- Advanced tuning: Per-query ranking weights, recency boosting, and configurable decay functions.
- Performance: Optional user-specified join key for optimal performance.
- Automatic joining: Falls back to on-the-fly JOIN key computation when no explicit key is provided.
Example usage:
SELECT id, title, content, fused_score
FROM rrf(
vector_search(documents, 'machine learning algorithms', rank_weight => 1.5),
text_search(documents, 'neural networks deep learning', rank_weight => 1.2),
join_key => 'id', -- optional join key for optimal performance
k => 60.0 -- optional smoothing factor
)
WHERE fused_score > 0.01
ORDER BY fused_score DESC;
Learn more in the RRF documentation.
Acceleration Refresh Metrics: Spice now exposes additional Prometheus metrics that provide detailed observability into dataset acceleration refreshes. These metrics help monitor data freshness and ingestion lag for accelerated datasets with a time column.
Reported metrics:
| Metric Name | Description |
|---|---|
| dataset_acceleration_max_timestamp_before_refresh_ms | Maximum value of the dataset's time column before refresh (milliseconds). |
| dataset_acceleration_max_timestamp_after_refresh_ms | Maximum value of the dataset's time column after refresh (milliseconds). |
| dataset_acceleration_refresh_lag_ms | Difference between max timestamp after and before refresh (milliseconds). |
| dataset_acceleration_ingestion_lag_ms | Lag between current wall-clock time and max timestamp after refresh (milliseconds). |
These metrics are emitted during each acceleration refresh and can be scraped by Prometheus for monitoring and alerting. For more details, see the Observability documentation.
Bug Fixes & Improvements
This release resolves several issues and improves reliability across search, connectors, and query planning:
- Full-Text Search (FTS): Ensures FTS metadata columns can be used in projections, fixes JOIN-level filters not having columns in schema, and adds support for persistent file-based FTS indexes. Applies a default limit of 1000 results if no limit is specified.
- Vector Search: Applies a default limit of 1000 results if no limit is specified, and fixes removal of an embedding column.
- Databricks SQL Warehouse: Improved error handling and support for async queries.
- Other: Fixes for Anthropic model regex validation, tweaked AI-model health checks, and improved error messages.
Contributors
Breaking Changes
No breaking changes.
Cookbook Updates
- Added Hybrid-Search using RRF - Combine results from multiple search methods (vector and text search) using Reciprocal Rank Fusion for improved relevance ranking.
The Spice Cookbook includes 78 recipes to help you get started with Spice quickly and easily.
Upgrading
To upgrade to v1.7.1, use one of the following methods:
CLI:
spice upgrade
Homebrew:
brew upgrade spiceai/spiceai/spice
Docker:
Pull the spiceai/spiceai:1.7.1 image:
docker pull spiceai/spiceai:1.7.1
For available tags, see DockerHub.
Helm:
helm repo update
helm upgrade spiceai spiceai/spiceai
AWS Marketplace:
🎉 Spice is now available in the AWS Marketplace!
What's Changed
Changelog
- ensure FTS metadata columns can be used in projection (#7282) by @Jeadie in #7282
- Fix JOIN level filters not having columns in schema (#7287) by @Jeadie in #7287
- Use file-based fts index (#7024) by @Jeadie in #7024
- Remove 'PostApplyCandidateGeneration' (#7288) by @Jeadie in #7288
- RRF: Rank and recency boosting (#7294) by @mach-kernel in #7294
- RRF: Preserve base ranking when results differ -> FULL OUTER JOIN does not produce time column (#7300) by @mach-kernel in #7300
- fix removing embedding column (#7302) by @Jeadie in #7302
- RRF: Fix decay for disjoint result sets (#7305) by @mach-kernel in #7305
- RRF: Project top scores, do not yield duplicate results (#7306) by @mach-kernel in #7306
- RRF: Case sensitive column/ident handling (#7309) by @mach-kernel in #7309
- For vector_search, use a default limit of 1000 if no limit specified (#7311) by @lukekim in #7311
- Fix Anthropic model regex and add validation tests (#7319) by @ewgenius in #7319
- Enhancement: Implement before/after/lag metrics for acceleration refresh (#7310) by @krinart in #7310
- Refactor chat model health check to lower tokens usage for reasoning models (#7317) by @ewgenius in #7317
- Enable chunking in SearchIndex (#7143) by @Jeadie in #7143
- Use logical plan in SearchQueryProvider (#7314) by @Jeadie in #7314
- FTS max search results 100 -> 1000 (#7331) by @Jeadie in #7331
- Improve Databricks SQL Warehouse Error Handling (#7332) by @sgrebnov in #7332
- use spicepod embedding model name for 'model_name' (#7333) by @Jeadie in #7333
- Handle async queries for Databricks SQL Warehouse API (#7335) by @phillipleblanc in #7335
- RRF: Fix ident resolution for struct fields, autohashed join key for varying types (#7339) by @mach-kernel in #7339
v1.7.0
Spice v1.7.0 (Sep 23, 2025)
Spice v1.7.0 upgrades to DataFusion v49 for improved performance and query optimization, introduces real-time full-text search indexing for CDC streams, EmbeddingGemma support for high-quality embeddings, new search table functions powering the /v1/search API, embedding request caching for faster and cost-efficient search and indexing, and OpenAI Responses API tool calls with streaming. This release also includes numerous bug fixes across CDC streams, vector search, the Kafka Data Connector, and error reporting.
What's New in v1.7.0
DataFusion v49 Highlights
Performance Improvements 🚀
- Equivalence System Upgrade: Faster planning for queries with many columns, enabling more sophisticated sort-based optimizations.
- Dynamic Filters & TopK Pushdown: Queries with
ORDER BYandLIMITnow use dynamic filters and physical filter pushdown, skipping unnecessary data reads for much faster top-k queries. - Compressed Spill Files: Intermediate files written during sort/group spill to disk are now compressed, reducing disk usage and improving performance.
- WITHIN GROUP for Ordered-Set Aggregates: Support for ordered-set aggregate functions (e.g.,
percentile_disc) withWITHIN GROUP. - REGEXP_INSTR Function: Find regex match positions in strings.
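For example, queries of these shapes benefit from the changes above (the events table and its columns are hypothetical):
-- Top-k: dynamic filters let the scan skip rows that cannot enter the top 10
SELECT event_id, event_time
FROM events
ORDER BY event_time DESC
LIMIT 10;
-- Ordered-set aggregate with WITHIN GROUP
SELECT percentile_disc(0.9) WITHIN GROUP (ORDER BY latency_ms) AS p90_latency
FROM events;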
See the DataFusion 49.0.0 Release Blog for details.
Spice Runtime Highlights
EmbeddingGemma Support: Spice now supports EmbeddingGemma, Google's state-of-the-art embedding model for text and documents. EmbeddingGemma provides high-quality, efficient embeddings for semantic search, retrieval, and recommendation tasks. You can use EmbeddingGemma via HuggingFace in your Spicepod configuration:
Example spicepod.yml snippet:
embeddings:
- from: huggingface:huggingface.co/google/embeddinggemma-300m
name: embeddinggemma
params:
hf_token: ${secrets:HUGGINGFACE_TOKEN}
Learn more about EmbeddingGemma in the official documentation.
POST /v1/search API Uses Search Table Functions: The /v1/search API now uses the new text_search and vector_search table functions for improved performance.
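As a hedged sketch, the same table functions can also be called directly in SQL (the dataset name and query text are illustrative):
SELECT * FROM text_search(questions, 'kafka consumer lag') LIMIT 10;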
Embedding Request Caching: The runtime now supports caching embedding requests, reducing latency and cost for repeated content and search requests.
Example spicepod.yml snippet:
runtime:
caching:
embeddings:
enabled: true
max_size: 128mb
item_ttl: 5s
See the Caching documentation for details.
Real-Time Indexing for Full-Text Search: Full-text search indexing is now supported for connectors that support real-time changes, such as Debezium CDC streams. Adding a full-text index on a column with refresh_mode: changes works as it does for full/append-mode refreshes, enabling instant search on new data.
Example spicepod.yml snippet:
datasets:
- from: debezium:cdc.public.question
name: questions
acceleration:
enabled: true
engine: duckdb
primary_key: id
refresh_mode: changes # Use 'changes'
params: *kafka_params
columns:
- name: title
full_text_search:
enabled: true # Enable full-text-search indexing
row_id:
- id
OpenAI Responses API Tool Calls with Streaming: The OpenAI Responses API now supports tool calls with streaming, enabling advanced model interactions such as web_search and code_interpreter with real-time response streaming. This allows you to invoke OpenAI-hosted tools and receive results as they are generated.
Learn more in the OpenAI Model Provider documentation.
Runtime Output Level Configuration: You can now set the output_level parameter in the Spicepod runtime configuration to control logging verbosity in addition to the existing CLI and environment variable support. Supported values are info, verbose, and very_verbose. The value is applied in the following priority: CLI, environment variables, then YAML configuration.
Example spicepod.yml snippet:
runtime:
output_level: info # or verbose, very_verboseFor more details on configuring output level, see the Troubleshooting documentation.
Bug Fixes
Several bugs and issues have been resolved in this release, including:
- CDC Streams: Fixed issues where refresh_mode: changes could prevent the Spice runtime from becoming Ready, and improved support for full-text indexing on CDC streams.
- Vector Search: Fixed bugs where the vector search HTTP pipeline could not find more than one IndexedTableProvider, and resolved errors with field mismatches in the vector_search UDTF.
- Kafka Integration: Improved Kafka schema inference with configurable sample size, improved consumer group persistence for SQLite and Postgres accelerations, and added cooperative mode support.
- Perplexity Web Search: Fixed bug where Perplexity web search sometimes used incorrect query schema (limit).
- Databricks: Fixed issue with unparsing embedded columns.
- Error Reporting: ThrottlingException is now reported correctly instead of as InternalError.
- Iceberg Data Connector: Added support for LIMIT pushdown.
- Amazon S3 Vectors: Fixed ingestion issues with zero-vectors and improved handling when vector index is full.
- Tracing: Fixed vector search tracing to correctly report SQL status.
Contributors
- @Jeadie
- @peasee
- @sgrebnov
- @kczimm
- @phillipleblanc
- @Advayp
- @lukekim
- @ewgenius
- @mach-kernel
- @krinart
- @ChrisTomAlxHitachi
New Contributors
- @ChrisTomAlxHitachi made their first contribution in github.com/spiceai/spiceai/pull/6932 🎉
Breaking Changes
No breaking changes.
Cookbook Updates
- New Spice with Dotnet SDK Recipe - The recipe shows how to query Spice using the Dotnet SDK.
The Spice Cookbook includes 78 recipes to help you get started with Spice quickly and easily.
Upgrading
To upgrade to v1.7.0, use one of the following methods:
CLI:
spice upgrade
Homebrew:
brew upgrade spiceai/spiceai/spice
Docker:
Pull the spiceai/spiceai:1.7.0 image:
docker pull spiceai/spiceai:1.7.0
For available tags, see DockerHub.
Helm:
helm repo update
helm upgrade spiceai spiceai/spiceai
AWS Marketplace:
🎉 Spice is now available in the AWS Marketplace!
What's Changed
Dependencies
- Rust: Upgraded from 1.88.0 to 1.89.0
- DataFusion: Upgraded from 48.0.1 to 49.0.0
- text-embeddings-inference: Upgraded from 1.7.3 to 1.8.2
- twox-hash: Upgraded from 1.6.3 to 2.1.0.
Changelog
- Fix parameterised query planning in DataFusion by @Jeadie in #6942
- fix: Update benchmark snapshots by @app/github-actions in #6944
- refactor: Decouple full text search candidate from UDTF by @peasee in #6940
- fix: Re-enable search integration tests by @peasee in #6930
- Update acknowledgements and spicepod.schema.json by @sgrebnov in #6948
- Add enabling the responses API by @lukekim in #6949
- Post-release housekeeping by @sgrebnov in #6951
- Add missing param in release notes by @Advayp in #6959
- Create comprehensive S3vectors test by @Jeadie in #6903
- Update ROADMAP after v1.6 release by @sgrebnov in #6955
- Update openapi.json by @app/github-actions in #6961
- Add build step for new spiced images in end game template by @Jeadie in #6960
- refactor: Use text search UDTF in v1/search by [@peasee](http...