Releases: dolthub/dolt
1.81.10
Merged PRs
dolt
- 10496: /{docker,go,integration-tests}: install git in docker containers, fix .git suffix on clone in sql-server
go-mysql-server
- 3427: Do not wrap hoisted subquery in Limit node until AFTER checking if it's a SubqueryAlias
Wrapping subquery nodes in a `Limit` node before checking if it was a `SubqueryAlias` was causing us to not find `SubqueryAlias` nodes. This was noticed when investigating #10472. We should probably be inspecting the whole node instead of simply checking the node type, but this is a quick fix.
filed #10494 to add better test coverage and address TODOs
- 3426: Do not push filters into SubqueryAliases that wrap RecursiveCTEs
fixes #10472
All RecursiveCTEs are wrapped by a SubqueryAlias node. We currently don't push filters through a RecursiveCTE so pushing the filter below the SubqueryAlias just causes it to be awkwardly sandwiched in between and might make some queries less optimal if the filter contains outerscope columns since the SQA gets marked as uncacheable. We probably can push filters through a RecursiveCTE in some cases, but we should spend more time thinking about what that means (see #10490).
Also contains some refactors that I noticed along the way.
skipped tests will be fixed in #3427
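The ordering bug fixed in #3427 can be illustrated with a toy sketch (hypothetical, simplified node types; not the real go-mysql-server AST):

```go
package main

import "fmt"

// Toy node types standing in for the real plan nodes.
type Node interface{}

type SubqueryAlias struct{ Name string }

type Limit struct{ Child Node }

// isSubqueryAlias is a shallow type check, like the one whose ordering the fix changes.
func isSubqueryAlias(n Node) bool {
	_, ok := n.(SubqueryAlias)
	return ok
}

func main() {
	sqa := SubqueryAlias{Name: "cte"}

	// Buggy order: wrap in Limit first, then check; the check never matches.
	fmt.Println(isSubqueryAlias(Limit{Child: sqa})) // false
	// Fixed order: check before wrapping.
	fmt.Println(isSubqueryAlias(sqa)) // true
}
```

As the release note says, inspecting the whole node (rather than a shallow type check) would be more robust; this sketch only shows why the check order matters.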
Closed Issues
1.81.9
Merged PRs
dolt
- 10487: Fix `dolt_clean` not respecting `dolt_ignore` and `dolt_nonlocal_tables` patterns
Fix #10462
- Fix `dolt_ignore` and `dolt_nonlocal_tables` not being excluded from the `dolt clean` default command and SQL interface.
- Add a `dolt clean -x` flag to override `dolt_ignore` deletions; similar to `git clean -x`.
- 10482: commit verification with dolt_tests
This PR adds the ability to perform verification of commit content using the dolt_tests table.
Setting the `dolt_commit_verification_group` system variable to a comma-delimited set of test groups will result in those tests being run before a commit is completed.
This validation is performed for commit, merge, cherry-pick, and rebase (through cherry-pick). All procedures/CLI operations provide the --skip-verification flag to bypass it.
There is currently one known bug with a skipped test: the rebase workflow breaks when a commit fails verification, but it should be handled like a conflict, which would allow the user to --continue the workflow.
- 10450: fix panic for empty table names, now a normal error
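A hedged sketch of the group-selection step in the verification feature (all names below are hypothetical; the real implementation reads the dolt_commit_verification_group system variable and the dolt_tests table):

```go
package main

import (
	"fmt"
	"strings"
)

// testRow is a hypothetical stand-in for a row of the dolt_tests table.
type testRow struct {
	Name  string
	Group string
}

// testsToRun picks the tests whose group appears in the comma-delimited
// system-variable value, trimming whitespace around each group name.
func testsToRun(groupsVar string, all []testRow) []testRow {
	want := map[string]bool{}
	for _, g := range strings.Split(groupsVar, ",") {
		want[strings.TrimSpace(g)] = true
	}
	var out []testRow
	for _, t := range all {
		if want[t.Group] {
			out = append(out, t)
		}
	}
	return out
}

func main() {
	all := []testRow{
		{Name: "row_counts", Group: "smoke"},
		{Name: "fk_integrity", Group: "integrity"},
		{Name: "perf_probe", Group: "perf"},
	}
	for _, t := range testsToRun("smoke, integrity", all) {
		fmt.Println(t.Name)
	}
}
```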
go-mysql-server
- 3426: Do not push filters into SubqueryAliases that wrap RecursiveCTEs
fixes #10472
All RecursiveCTEs are wrapped by a SubqueryAlias node. We currently don't push filters through a RecursiveCTE so pushing the filter below the SubqueryAlias just causes it to be awkwardly sandwiched in between and might make some queries less optimal if the filter contains outerscope columns since the SQA gets marked as uncacheable. We probably can push filters through a RecursiveCTE in some cases, but we should spend more time thinking about what that means (see #10490).
Also contains some refactors that I noticed along the way.
skipped tests will be fixed in #3427
- 3425: Do not set scope length during join planning
related to #10472
Scope length should only be set when assigning indexes, and only if it is not zero. The scope length set during join planning doesn't actually refer to the correct scope length, and if it was actually supposed to be zero, it was never correctly set back to zero, causing a panic.
Closed Issues
1.81.8
Merged PRs
dolt
- 10485: Git remote support fixes: shard git blobstore writes and normalize clone/remote URLs
Fix git-remote workflows by defaulting git-blobstore sharding to 50MB and correcting clone dir inference, SSH remote parsing, and empty-remote validation (with tests).
- 10483: Add `git+*` dbfactory remotes with required `--git-cache-dir`, `--ref` support, and integration tests
Implements git-backed Dolt remotes via dbfactory (git+file/http/https/ssh), requiring --git-cache-dir and supporting --ref, with end-to-end BATS + Go multi-client coverage for push/clone/pull and empty-remote bootstrap.
- 10481: [no-review-notes] Removed most of the remaining LD_1 specific codepaths and types
- 10474: NBS-on-Git: add GitBlobstore-backed NBS store and empty-remote bootstrap
Add nbs.NewGitStore wiring plus fetch/push semantics and tests to open against an empty Git remote and bootstrap refs/dolt/data on first write.
- 10466: GitBlobstore: add cache
Make GitBlobstore cache fetches.
- 10458: GitBlobstore: remote-managed fetch/merge/push sync
- 10429: go/store/nbs: For local databases, crash on fatal I/O errors during writes.
If an fsync fails, or if a critical write(2) call returns an error against a shared mutable file, it is not safe for the server to keep running because it cannot necessarily guarantee the state of the files as they exist on disk and will exist on disk in the future.
Implement functionality so that the Dolt process crashes in such cases.
go-mysql-server
- 3425: Do not set scope length during join planning
related to #10472
Scope length should only be set when assigning indexes, and only if it is not zero. The scope length set during join planning doesn't actually refer to the correct scope length, and if it was actually supposed to be zero, it was never correctly set back to zero, causing a panic.
- 3423: Do not allow sort-based joins between text and number type columns
fixes #10435
Disables merge and range heap joins between text and number type columns
Closed Issues
1.81.7
Merged PRs
dolt
- 10457: gitblobstore: implement Concatenate for NBS compatibility
Implements GitBlobstore.Concatenate with CAS-safe commits, chunked output support via MaxPartSize, and end-to-end tests to enable NBS blobstore persister/conjoin paths.
- 10456: go: sqle/dsess: transactions.go: When serializing transaction commits against a working set, form the key with the normalized db name.
Previously, this lock would accidentally allow concurrent access to writing the database working set value because a non-normalized database name like `db/main\x00/refs/heads/main` would allow access along with a normalized database name like `db\x00/refs/heads/main`. This did not impact correctness, since the working sets are safe for concurrent modification at the storage layer, but it could cause transient failures for a client if the optimistic lock retries failed sequentially enough times.
Here we fix the bug so that the txLocks serialize access to the ref heads as expected.
- 10455: Update `DoltTable.ProjectedTags()` to distinguish between unset `projectedCols` and zero `projectedCols`
fixes #10451
When getting `ProjectedTags()`, we were not distinguishing between when `projectedCols` was `nil` (meaning no projections are set, so we should return all columns) and when `projectedCols` was an empty array of length 0 (meaning the table has been pruned to zero columns but we still care about the number of rows), since in both cases `projectedCols` would have a length of 0. This was causing `LEFT OUTER JOIN`s that didn't project any left-side columns to not return the correct number of columns. This was fixed by checking for whether `projectedCols` was `nil` instead (which is what we do in other functions like `Projections()` and `HistoryTable.ProjectedTags()`).
Also some minor refactorings:
- renamed `getIta` to `getIndexedTableAccess`
- removed unused variables returned by `getSourceKv`
Test added in dolthub/go-mysql-server#3424
- 10444: Optimize BlockOnLock retry loop to eliminate per-iteration allocations
Addresses feedback on #10442 to reduce GC churn in the `BlockOnLock` retry loop. The original implementation allocated a new timer on each iteration via `time.After()`, causing unnecessary memory pressure when locks are held for extended periods.
Changes
- Added `lockRetryInterval` constant: extracts the hardcoded 10ms retry interval into a named constant for clarity and tunability
- Replaced `time.After` with lazy-initialized `time.Ticker`: a single ticker instance is reused across all retries, eliminating per-iteration allocations
- Optimized fast path: ticker creation is deferred until the first lock failure, avoiding overhead when the lock is immediately available
Before
```go
for {
	err = lock.TryLock()
	if err == nil {
		break
	}
	select {
	case <-ctx.Done():
		return nil, ctx.Err()
	case <-time.After(10 * time.Millisecond): // New allocation each iteration
	}
}
```
After
```go
var ticker *time.Ticker
defer func() {
	if ticker != nil {
		ticker.Stop()
	}
}()
for {
	err = lock.TryLock()
	if err == nil {
		break
	}
	if ticker == nil {
		ticker = time.NewTicker(lockRetryInterval) // Allocate once, reuse
	}
	select {
	case <-ctx.Done():
		return nil, ctx.Err()
	case <-ticker.C:
	}
}
```
- 10433: fix variable name
- 10430: add benchmark mini command
- 10424: blobstore: add chunked-object mode to GitBlobstore
This PR introduces `GitBlobstore`, a Blobstore implementation backed by a git repository’s object database (bare repo or .git dir). Keys
are stored as paths in the tree of a commit pointed to by a configured ref (e.g. refs/dolt/data), enabling Dolt remotes to be hosted on
standard git remotes.
High-level design
• Storage model
• Each blobstore key maps to a git tree path under the ref’s commit.
• Small objects are stored as a single git blob at .
• Large objects (when chunking enabled) are stored as a git tree at containing part blobs:
• /00000001, /00000002, … (lexicographically ordered)
• No descriptor header / no stored total size; size is derived by summing part blob sizes.
• Roll-forward only: this PR supports the above formats; it does not include backward-compat for any older descriptor-based chunking
formats.
• Per-key versioning
• Get/Put/CheckAndPut return a per-key version equal to the object id at :
• inline: blob OID
• chunked: tree OID
• IdempotentPut
• For non-manifest keys, Put fast-succeeds if already exists (assumes content-addressed semantics common in NBS/table files),
returning the existing per-key version without consuming the reader.
• manifest remains mutable and is updated via CheckAndPut.
• CheckAndPut semantics
• CheckAndPut performs CAS against the current per-key version at (not against the HEAD commit hash).
• Implementation uses a ref-level CAS retry loop:
• re-checks version at current HEAD
• only consumes/hashes the reader after the expected version matches
• retries safely if the ref advances due to unrelated updates
• Blob↔tree transitions
• Handles transitions between inline blob and chunked tree representations by proactively removing conflicting index paths before
staging new entries (avoids git index file-vs-directory conflicts).
Internal git plumbing additions
Adds/uses a unified internal GitAPI abstraction to support:
• resolving path objects and types (blob vs tree)
• listing tree entries for chunked reads
• removing paths from the index in bare repos
• staging and committing new trees, with configurable author/committer identity fallback
- 10419: GitBlobstore: implement `CheckAndPut` CAS semantics + add tests
This PR adds the next write-path primitive to GitBlobstore: `CheckAndPut` with proper compare-and-swap behavior, and a focused test suite (including a concurrency/CAS-failure scenario).
- 10418: Various bug fixes for checking foreign key constraints during merge
This PR mainly addresses the need to perform type conversions during index lookups when determining whether a diff introduces a foreign key constraint violation. The old code assumed that the key values were binary identical between parent and child table, and this isn't always the case (esp. in Doltgres).
Also fixes a related bug in constructing the primary key from a secondary key, which occurs when a secondary index contains primary key columns.
go-mysql-server
- 3419: Use placeholder `ColumnId`s for `EmptyTable`
Fixes #10434
`EmptyTable` implements `TableIdNode`, so it was using `Columns()` to get the `ColumnId`s. `EmptyTable.WithColumns()` is only ever called for testing purposes; as a result, the `ColSet` returned is empty. This causes the column-to-`ColumnId` mapping to be incorrectly offset, leading to the wrong index id being assigned.
This fix adds a case for `EmptyTable` in `columnIdsForNode` to add placeholder `ColumnId` values so the mappings are correctly aligned. I considered setting the actual `ColSet` for `EmptyTable` but there's actually not a good way to do that. Regardless, the index id will be set either using the name of the column or using the Projector node that wraps the `EmptyTable`.
Similar to `SetOp`, `EmptyTable` probably shouldn't be a `TableIdNode` (see #10443)
- 3417: Do not join remaining tables with CrossJoins during buildSingleLookupPlan
fixes #10304
Despite what the comment said, it's not safe to join remaining tables with CrossJoins during `buildSingleLookupPlan`. It is only safe to do so if every filter has been successfully matched to `currentlyJoinedTables`. Otherwise, we end up dropping filters.
For example, we could have a query like `select * from A, B inner join C on B.c0 <=> C.c0` where table A has a primary key and tables B and C are keyless. `columnKey` matches A's primary key column and A would be added to `currentlyJoinedTables`. Since the only filter references B and C and neither are part of `currentlyJoinedTables`, nothing is ever added to `joinCandidates`. However, it's unsafe to join all the tables with CrossJoins because we still need to account for the filter on B and C.
- 3416: allow Doltgres to add more information schema tables
- 3415: Simplify Between expressions for GetField arguments
fixes #10284
part of #10340
benchmarks
Closed Issues
1.81.6
Merged PRs
dolt
- 10417: GitBlobstore: implement `Put` with CAS retries + configurable identity; add `Put` tests
This PR adds the first GitBlobstore write path: `GitBlobstore.Put`, implemented on top of the existing internal/git.GitAPI plumbing. It also adds unit tests for Put, including a contention scenario to verify we don't clobber concurrent writers.
- 10416: add `HashObject` + GitAPIImpl write-primitive tests
This PR extends the unified internal/git plumbing API with a streaming blob-write primitive and adds targeted unit coverage for the write building blocks we’ll use to implement GitBlobstore write paths. - 10414: add internal git ReadAPI/WriteAPI impl scaffolding + refactor GitBlobstore reads
This PR advances the Git-backed blobstore.Blobstore prototype by adding a principled internal git write plumbing layer (still unused by GitBlobstore for now) and refactoring read plumbing into a `ReadAPI` interface + concrete impl to match the write side.
- 10412: /.github/actions/ses-email-action: bump aws ses client
- 10409: Add read-only GitBlobstore
This PR introduces an initial read-only GitBlobstore implementation to enable treating a git object database (bare repo / .git dir) as a Dolt blobstore, without a checkout. - 10407: Correctly identify which secondary indexes need to be updated during a merge
Some columns in a table have a different encoding when used in a secondary index key: mainly TEXT and BLOB types are an address encoding in the primary index, but when used in a secondary index (with a length prefix), the prefixes get stored in the secondary index as an inline encoding.
We have a check during merge that determines whether or not we need to rebuild a secondary index from scratch instead of simply merging in changes from the other branch. This is necessary because in fringe situations, a row that did not change on the other branch's primary index did change on the other branch's secondary index. This happens when the other branch changed the secondary index's schema, such as dropping or adding columns, changing a length prefix, etc.
In these situations, it is not safe to simply diff the primary indexes and update the changed rows in the secondary index.
When I originally wrote this check, I compared the schema of the merged table's index with the original table's index to determine whether the other branch introduced a change. However, I deliberately ignored things like the TEXT special-treatment when doing this check. The motivation for doing it this way was to ensure that if the other branch changes a TEXT type to a VARCHAR type (or vice versa), we will still detect this change even though the encoding of the index doesn't change.
In practice we don't need to worry about that, because such changes still get detected and handled correctly elsewhere in the merge process. And the check as written is incorrect, leading to situations where the merger erroneously concludes that an index will be dropped and rebuilt, and thus doesn't update it during the merge.
This PR fixes the logic of the check and adds additional tests to verify that the merge behavior is still correct.
- 10406: Add support for full binlog row metadata
Adds support for sending additional table map metadata (e.g. column names) when operating as a binlog primary server. This mode is activated when `@@binlog_row_metadata` is set to `FULL`. Also adds integration tests for the Python mysql-replication library to test this feature.
- 10351: Bump lodash from 4.17.21 to 4.17.23 in /integration-tests/mysql-client-tests/node
Bumps lodash from 4.17.21 to 4.17.23.
- 9960: Bump com.mysql:mysql-connector-j from 8.0.33 to 8.2.0 in /integration-tests/mysql-client-tests/java
Bumps com.mysql:mysql-connector-j from 8.0.33 to 8.2.0.
1.81.5
Merged PRs
dolt
- 10405: dolt_commit respects foreign key checks
When `foreign_key_checks` is set to 0, `dolt_commit` should allow staged working states with foreign key violations.
Fixes issue #5605
- 10401: go: sqle: Fix CREATE DATABASE ... COLLATE ... when a non-Dolt database is the current connection's database.
- 10384: Add `edit` support to rebase.
This change adds the ability to `edit` in the middle of a `rebase --interactive` session.
go-mysql-server
- 3410: Replace `Time.Sub` call in `TIMESTAMPDIFF` with `microsecondsDiff`
fixes #10397
`Time.Sub` doesn't work for times with a difference greater than 9,223,372,036,854,775,807 (2^63 - 1) nanoseconds, or ~292.47 years. This is because `Time.Sub` returns a `Duration`, which is really an `int64` representing nanoseconds. MySQL only stores time precision to the microsecond, so we actually don't care about the difference in nanoseconds.
However, there's no easy way to directly expose the number of microseconds or seconds since epoch using the public functions for `Time` -- this is because seconds since epoch are encoded differently with different epochs depending on whether the time is monotonic or not (Jan 1, 1885 UTC or Jan 1, 0001 UTC).
`Time.Sub` uses `Time.sec` to normalize `Time` objects to seconds since the Jan 1, 0001 UTC epoch. But `Time.sec` isn't public so we can't call it ourselves. And `Time.Second` and `Time.Nanosecond` only give the second and nanosecond portion of a wall time, not the seconds/nanoseconds since an epoch. However, `Time.UnixMicro` does give us the microseconds since Unix epoch (January 1, 1970 UTC)...by calling `Time.sec` and then converting that to Unix time.
So `microsecondsDiff` calculates the difference in microseconds between two `Time` objects, getting their microsecond values by calling `Time.UnixMicro` on both of them. This isn't the most efficient but it's the best we can do with public functions.
- 3409: Calculate `month` and `quarter` for timestampdiff based on date and clock time values
fixes #10393
Refactors logic for `year` into separate function `monthDiff` so that it can be reused for `month` and `quarter`
- 3408: Added runner to hooks
This adds a `sql.StatementRunner` to all hooks, as they may want to execute logic depending on the hook statement. For example, cascade table deletions could literally just run `DROP` on the cascading items when dropping a table rather than trying to manually craft nodes which are subject to change over time.
- 3407: Calculate `year` for `timestampdiff` based on date and clock time values
fixes #10390
The previous calculation incorrectly assumed every year has 365 days (doesn't account for leap years) and every month has 30 days (doesn't account for over half the months). The offset from this wasn't noticeable with smaller time differences but became apparent with larger ones. This still needs to be fixed for `month` and `quarter` (#10393).
- 3400: Allow pushing filters past Project nodes
The analyzer tries to improve query plans by moving filter expressions to be directly above the table that they reference. Previously, this had a limitation in that it would treat references to aliases in Project nodes as opaque: if the alias expression references a table, the analysis wouldn't consider the filter to reference that table.
As a result, it wasn't possible to push a filter into multiple subqueries unless a previous optimization eliminated the Project node.
This PR enhances the analysis with the following steps:
- Every alias on a Project node has a unique column id, so we walk the plan tree to build a map from Project column ids to the underlying expressions.
- When computing which filter expressions can be pushed to which tables, we normalize the filter expressions by replacing GetFields on a Project with the underlying expression.
- When pushing a Filter above a table or into a subquery, we also replace GetFields on a Project with the underlying expression.
Reasoning about the safety is tricky here. We should replace a GetField with the underlying expression if and only if we're actually moving the Filter beneath the Project.
The main concerns would be:
- If a join plan has two Project nodes without an opaque node (like a SubqueryAlias) between them, then a Filter might only get pushed beneath one Project, but references to both Projects in the filter expression could get unwrapped.
- If a join plan has a Project node in one child, and a Table in the other, then a filter expression could get pushed to be above the Table even if it references the Project
In practice I don't think the first concern can happen because it would require that the filter is getting pushed to some nameable-but-not-opaque node between the Projects, which I don't think exists.
The second concern requires that the project aliases an expression that doesn't reference any tables but can't be replaced with its underlying expression in a filter, and I don't think that's possible either.
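The substitution step described above can be sketched with toy types (hypothetical names, not the actual go-mysql-server analyzer code): a filter referencing a Project alias by column id is normalized by replacing the GetField with the alias's underlying expression.

```go
package main

import "fmt"

// Toy expression types standing in for the real analyzer's expressions.
type Expr interface{ String() string }

type GetField struct{ ColID int }

func (g *GetField) String() string { return fmt.Sprintf("col%d", g.ColID) }

type Mul struct{ Left, Right Expr }

func (m *Mul) String() string { return m.Left.String() + " * " + m.Right.String() }

type Gt struct{ Left, Right Expr }

func (c *Gt) String() string { return c.Left.String() + " > " + c.Right.String() }

type Lit struct{ V int }

func (l *Lit) String() string { return fmt.Sprint(l.V) }

// substitute replaces GetFields whose column id belongs to a Project alias
// with the alias's underlying expression, recursing through the filter.
func substitute(e Expr, aliases map[int]Expr) Expr {
	switch e := e.(type) {
	case *GetField:
		if under, ok := aliases[e.ColID]; ok {
			return under
		}
		return e
	case *Gt:
		return &Gt{substitute(e.Left, aliases), substitute(e.Right, aliases)}
	case *Mul:
		return &Mul{substitute(e.Left, aliases), substitute(e.Right, aliases)}
	default:
		return e
	}
}

func main() {
	// Project: SELECT price * qty AS total  (the alias got column id 7).
	aliases := map[int]Expr{7: &Mul{&GetField{1}, &GetField{2}}}
	filter := &Gt{&GetField{7}, &Lit{100}} // WHERE total > 100

	// After substitution the filter no longer references the Project alias,
	// so it can be pushed beneath the Project.
	fmt.Println(substitute(filter, aliases)) // col1 * col2 > 100
}
```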
vitess
- 453: Add support for serializing optional table map metadata
To support `@@binlog_row_metadata = 'FULL'`, we need support for serializing optional table map metadata.
- 452: Bump google.golang.org/grpc from 1.24.0 to 1.56.3
Bumps google.golang.org/grpc from 1.24.0 to 1.56.3.
Closed Issues
1.81.4
Merged PRs
dolt
- 10392: avoid `math.Pow` in `weibullCheck`
- 10389: go/store/nbs: journal_record.go: Do have processJournalRecords log errors when it sees an early EOF.
Logging the error here was overly conservative.
- 10387: /.github/workflows/bump-dependency.yaml: fix autoclosing
go-mysql-server
- 3407: Calculate `year` for `timestampdiff` based on date and clock time values
fixes #10390
The previous calculation incorrectly assumed every year has 365 days (doesn't account for leap years) and every month has 30 days (doesn't account for over half the months). The offset from this wasn't noticeable with smaller time differences but became apparent with larger ones. This still needs to be fixed for `month` and `quarter` (#10393).
- 3406: Set table names to lower when creating table map
fixes #10385
Closed Issues
- 10390: TIMESTAMPDIFF produces incorrect results for `year` units
1.81.3
Merged PRs
dolt
- 10383: Fix SQL engine panic when database is locked
• Prevent NewSqlEngine from panicking when a repo fails to open due to lock contention.
• Load-check the underlying DoltDB in engine DB collection and return the recorded load error (e.g.
nbs.ErrDatabaseLocked) instead of nil-dereferencing.
• Enables embedded/driver callers to detect “database locked” and safely retry engine initialization.
- 10376: /go/libraries/doltcore/sqle/dsess/autoincrement_tracker.go: remove print statement
- 10375: Fix embedded DB lock contention by closing nocache DoltDBs and plumbing DBLoadParams through CREATE DATABASE / clone / stats
Summary
This change fixes an embedded / driver reopen failure where a two-phase flow (CREATE DATABASE then later Ping) could still
hit the database is locked by another dolt process even with disable_singleton_cache + fail-fast lock behavior enabled.
Key updates:
• Close underlying DoltDBs in nocache mode: sqle.DoltDatabaseProvider.Close() now closes all underlying *doltdb.DoltDB
instances when DisableSingletonCacheParam is in effect, ensuring .dolt/noms/LOCK is released on engine shutdown.
• Plumb DBLoadParams into provider-managed DB creation paths:
• CREATE DATABASE / UNDROP DATABASE now use env.LoadWithoutDB so DBLoadParams can be applied before any DB is opened.
• dolt_clone applies provider DBLoadParams to the clone env before registration.
• registerNewDatabase defensively applies provider DBLoadParams to the env.
• Propagate DBLoadParams into stats backing store: statspro now uses LoadWithoutDB for new stats storage and copies
DBLoadParams from the host env so embedded-mode settings apply consistently.
• Regression coverage: adds a test that creates a DB via the SQL engine, closes the engine, and asserts the LOCK is
released.
Why
Embedded callers rely on deterministic reopen semantics under contention. Previously, the SQL provider close path didn’t
close the underlying *doltdb.DoltDB for newly created DBs in nocache mode, leaving the journal manifest LOCK held and
causing retries / reopens to fail.
- 10374: cache `strictLookupIter`
- 10371: go.mod, .github: Bump Go version to 1.25.6.
Fixes potential impact in some configurations from CVE-2025-61729.
- 10366: Bug fix: don't set branch control admin privs when rebasing
Fixes #10352
- 10363: Add embedded-mode options to bypass DB cache and fail fast on journal lock contention
This PR adds opt-in controls for embedded/driver use-cases that need deterministic reopen semantics under contention:
• Disable local singleton DB caching for file-backed databases via a new dbfactory param, so each open constructs a fresh store.
• Fail fast on journal manifest lock timeout via a new dbfactory param, returning a sentinel error instead of falling back to read-only.
• Plumb DB load parameters through the SQL engine / env so an embedded driver can thread these options into Dolt database creation.
Key changes
• dbfactory:
• Adds DisableSingletonCacheParam (disable_singleton_cache)
• Adds FailOnJournalLockTimeoutParam (fail_on_journal_lock_timeout)
• Bypasses the singleton cache in FileFactory.CreateDB when disable_singleton_cache is set.
• nbs:
• Adds ErrDatabaseLocked
• Adds JournalingStoreOptions + NewLocalJournalingStoreWithOptions
• Updates newJournalManifest(..., failOnTimeout) to return ErrDatabaseLocked on lock timeout when enabled.
• SQL engine / env plumbing:
• Adds DBLoadParams to engine.SqlEngineConfig and threads them into env.DoltEnv prior to DB loading.
• Adds DBLoadParams to env.DoltEnv and merges them into both DB load paths (LoadDoltDB and LoadDoltDBWithParams).
• Updates env.MultiEnvForDirectory to respect a caller-provided *DoltEnv to avoid preloading before params are set.
- 10314: Minimize binlog deletions in branch control
I didn't find a way to trigger multiple deletions through SQL on non-existent rows, but just in case we're somehow able to get to that point through some other bug, we put it in the binlog every time. This way, we only add to the binlog if we actually had something to delete.
- 9748: Bump gopkg.in/yaml.v3 from 3.0.0 to 3.0.1 in /integration-tests/transactions
Bumps gopkg.in/yaml.v3 from 3.0.0 to 3.0.1.
go-mysql-server
- 3406: Set table names to lower when creating table map
fixes #10385
- 3404: Bump google.golang.org/protobuf from 1.28.1 to 1.33.0
Bumps google.golang.org/protobuf from 1.28.1 to 1.33.0.
- 3403: /go.{mod,sum}: bump go version
- 3399: GMS warning on functional indices
Makes it so `create index` queries with an expression argument will produce a warning instead of an error.
- 3396: avoid converting `float64` to `float64`
We save on conversion costs by avoiding a call to convert float64 to float64.
Unfortunately this has little to no impact on any of our benchmarks because `groupbyIter` uses concurrency.
benchmarks: #10359 (comment)
- 3395: fewer `strings.ToLower` calls in `gatherTableAlias`
benchmarks: #10355 (comment)
- 3388: Apply filter simplifications to Join condition
part of #10284
part of #10335
- 3386: Push filters that contain references to outer/lateral scopes.
We attempt to push filter expressions deeper into the tree so that they can reduce the size of intermediate iterators. Ideally, we want to push filters to directly above the data source that they reference.
Previously, we only pushed filters if they only referenced a single table, since pushing a filter that referenced multiple tables could potentially move the filter to a location where one of the referenced tables is no longer in scope. However, if the extra table references refer to a table in an outer scope or lateral scope, pushing the filter is completely safe. GetFields that reference an outer or lateral scope can be effectively treated as literals for the purpose of this optimization.
This PR changes `getFiltersByTable`, a function that maps tables onto the filters that reference those tables. Previously it would ignore filters that referenced multiple tables. Now it allows those filters, provided that the extra references are to outer/lateral scopes.
This improves many of the plan tests:
- The changed test in tpch_plans.go pushes a filter into the leftmost table lookup
- The second changed test in query_plans.go replaces a naive InnerJoin with a CrossHashjoin
- integration_plans.go shows many queries that now have an IndexedTableAccess instead of a table scan, or where we push a filter deeper into a join.
A small number of neutral / slightly negative changes: - One of the changes in integration_plans.go introduces a redundant filter that was previously being removed. In practice this is pretty benign because filters rarely impact the runtime unless they require type conversions.
- The first changed test in query_plans.go replaces a LookupJoin with a LateralCrossJoin on an IndexedTableAccess. These two plans are effectively equivalent, but the LateralCrossJoin is harder to analyze, has a larger estimated cost and larger row estimate, and could in theory inhibit subsequent optimizations. I imagine we could create a new analysis pass that converts this kind of LateralCrossJoin into a LookupJoin.
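The core rule can be sketched as a predicate over the tables a filter references. All names below are hypothetical illustrations, not GMS's actual API: a reference to an outer or lateral scope behaves like a literal and does not block the push, while any other extra in-scope table reference does.

```go
package main

import "fmt"

// canPushToTable reports whether a filter can be pushed down to sit
// directly above `target`. References to tables in outer or lateral
// scopes are effectively constants here, so they do not block the
// push; any other extra table reference does.
func canPushToTable(filterRefs []string, target string, outerScopes map[string]bool) bool {
	sawTarget := false
	for _, ref := range filterRefs {
		switch {
		case ref == target:
			sawTarget = true
		case outerScopes[ref]:
			// Outer/lateral reference: treat like a literal.
		default:
			return false // references another in-scope table
		}
	}
	return sawTarget
}

func main() {
	outer := map[string]bool{"o": true}
	// t1.x = o.y references only t1 plus an outer scope: pushable.
	fmt.Println(canPushToTable([]string{"t1", "o"}, "t1", outer)) // true
	// t1.x = t2.y references two in-scope tables: not pushable.
	fmt.Println(canPushToTable([]string{"t1", "t2"}, "t1", outer)) // false
}
```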
- 3379: Create scope mapping for views
This is a partial fix for #10297
When parsing a subquery alias, we create a new column id for each column in the SQA schema. The scope mapping is a dictionary on the SQA node that maps those column ids onto the expressions within the subquery that determine their values, and is used in some optimizations. For example, in order to push a filter into a subquery, we need to use the scope mapping to replace any GetFields that were pointing to the SQA with the expressions those fields map to. If for whatever reason the SQA doesn't have a scope mapping, we can't perform that optimization.
We parse views by recursively calling the parser on the view definition. This works but it means that the original parser doesn't have any references to the expressions inside the view, which prevents us from creating the scope mapping.
This PR attempts to fix this. Instead of defining the SQA columns in the original parser (where we no longer have access to the view's scope), we now create the columns while parsing the view, and attach them to the scope object for the view definition. Then we store that scope in a field on the Builder, so that the original parser can copy them into its own scope.
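The way a scope mapping is consumed can be sketched as a substitution pass. The types below are invented for illustration and are not GMS's real nodes: the mapping takes a subquery alias's column ids to the inner expressions that define them, and pushing a filter into the subquery replaces each GetField with its mapped expression.

```go
package main

import "fmt"

// Expr is a toy expression: either a column reference (by id) or a
// literal. Real GMS expressions are trees, but the substitution idea
// is the same.
type Expr struct {
	ColID int    // nonzero means a column reference
	Lit   string // set when ColID == 0
}

// substitute rewrites references to a subquery alias's columns using
// the alias's scope mapping (column id -> defining expression).
func substitute(filter []Expr, scopeMapping map[int]Expr) []Expr {
	out := make([]Expr, len(filter))
	for i, e := range filter {
		if inner, ok := scopeMapping[e.ColID]; ok {
			out[i] = inner // replace the GetField with the inner expression
		} else {
			out[i] = e
		}
	}
	return out
}

func main() {
	// Column id 7 of the alias is defined by the expression "a+1"
	// inside the view.
	mapping := map[int]Expr{7: {Lit: "a+1"}}
	rewritten := substitute([]Expr{{ColID: 7}}, mapping)
	fmt.Println(rewritten[0].Lit) // a+1
}
```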
This feels hacky, but was the best way I could think of to generate the scope mappings and ensure they're visible outside the view. - 2103: Bump google.golang.org/grpc from 1.53.0 to 1.56.3
Bumps [google.golang.org/grpc](https://g...
1.81.2
Merged PRs
dolt
- 10357: Prevent temporary table name collisions
fixes #10353
MySQL allows creating temporary tables with the same names as existing non-temporary tables and creating non-temporary tables with the same names as existing temporary tables. However, temporary tables cannot have the same name as other temporary tables.
Our previous implementation seemed to have assumed temporary tables with the same names were allowed. Each session had a map of db names to an array of tables, and new temporary tables were added to the end of the array, without checking if there was a table with an existing name. When fetching or dropping a temporary table, we iterated through the array and returned/dropped the first table with a matching name; this meant even though we allowed temporary tables with the same name, we only ever operated on whichever one was created first.
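The fix replaces the per-database array with a name-keyed map. A minimal sketch of that approach, using hypothetical types rather than Dolt's actual session code:

```go
package main

import (
	"fmt"
	"strings"
)

// tempTables maps db name -> lowercased table name -> definition,
// replacing the old per-db slice. Using a map makes duplicate names
// impossible and makes lookup and drop O(1).
type tempTables map[string]map[string]string

func (t tempTables) create(db, name, def string) error {
	key := strings.ToLower(name)
	if _, exists := t[db][key]; exists {
		return fmt.Errorf("temporary table %s already exists", name)
	}
	if t[db] == nil {
		t[db] = map[string]string{}
	}
	t[db][key] = def
	return nil
}

func (t tempTables) get(db, name string) (string, bool) {
	def, ok := t[db][strings.ToLower(name)]
	return def, ok
}

func main() {
	tt := tempTables{}
	fmt.Println(tt.create("mydb", "t1", "CREATE ...")) // first create: <nil>
	fmt.Println(tt.create("mydb", "T1", "CREATE ...")) // duplicate name: error
	_, ok := tt.get("mydb", "t1")
	fmt.Println(ok) // true
}
```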
Since temporary tables with the same names are actually not allowed, the array of temporary tables was replaced with a name-to-table map, making both `GetTemporaryTable` and `DropTemporaryTable` faster. This does make `GetAllTemporaryTables` slower, since we now have to iterate over the map to build an array of temporary tables, but `GetAllTemporaryTables` doesn't seem to be called as frequently as `GetTemporaryTable`. - 10349: Remove requirement for -i in dolt_rebase
A holdover from the original implementation of rebase, now addressed!
dolt stash apply
Adds #10131
Adds `dolt stash apply`. Functions similarly to `drop` and `pop` in usage: accepts no argument for the most recent stash, or `int` or `stash@{int}` for non-top stashes, i.e. `dolt stash apply 2` or `dolt stash apply stash@{2}`. - 10227: #5862: Add ignore system table
Summary
Adds a new `dolt_status_ignored` system table that extends `dolt_status` functionality by including an `ignored` column to identify which unstaged tables match patterns defined in `dolt_ignore`. This provides a SQL interface equivalent to `dolt status --ignored`.
Changes
New System Table
`dolt_status_ignored` - Same schema as `dolt_status` plus an `ignored` column:
- `table_name` (text): Name of the table
- `staged` (boolean): Whether the table is staged
- `status` (text): Status of the table (e.g., "new table", "modified", "conflict")
- `ignored` (boolean): Whether the table matches a `dolt_ignore` pattern
Implementation Details
The implementation started with logic similar to `status_table.go` and was then refactored to share common code between both tables while adding the ignore-checking functionality.
go/libraries/doltcore/doltdb/system_table.go
- Added `StatusIgnoredTableName` constant
- Registered table in `GeneratedSystemTableNames()` and `DoltGeneratedTableNames`
go/libraries/doltcore/sqle/database.go
- Added switch case for `StatusIgnoredTableName` in `getTableInsensitiveWithRoot()`
- Extracted `getStatusTableRootsProvider()` helper to eliminate duplicate logic between the `dolt_status` and `dolt_status_ignored` switch cases
go/libraries/doltcore/sqle/dtables/status_table.go
- Added `getStatusRowsData()` function using the existing `statusTableRow` struct to share status collection logic between both tables
- Refactored `newStatusItr()` to use the shared function
go/libraries/doltcore/sqle/dtables/status_ignored_table.go (new file)
- Implements `StatusIgnoredTable` and `StatusIgnoredItr`
- Uses shared `getStatusRowsData()` from `status_table.go`
- Uses adapter pattern via `DoltTableAdapterRegistry`, similar to `StatusTable`
- Adds ignore-checking logic via helper functions:
  - `getIgnorePatterns()`: Fetches patterns from `dolt_ignore`
  - `buildUnstagedTableNameSet()`: Creates a set for quick lookup
  - `checkIfIgnored()`: Checks if a table matches an ignore pattern (returns error on failure)
Behavior
- Unstaged tables: `ignored=1` if the table name matches a pattern in `dolt_ignore`, `ignored=0` otherwise
- Staged tables: Always `ignored=0` (staging overrides ignore patterns, matching git behavior)
- Constraint violations, merge conflicts, schema conflicts: Always `ignored=0`
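The behavior rules above can be sketched as a single predicate. This is a hedged illustration: the function name is invented, and the `*`-glob matching via `path.Match` is a stand-in, since `dolt_ignore`'s actual pattern syntax may differ.

```go
package main

import (
	"fmt"
	"path"
)

// isIgnored reports whether an unstaged table matches any ignore
// pattern. Staged tables are never ignored, since staging overrides
// ignore patterns (matching git behavior).
func isIgnored(table string, staged bool, patterns []string) bool {
	if staged {
		return false // staged tables always report ignored=0
	}
	for _, p := range patterns {
		if ok, err := path.Match(p, table); err == nil && ok {
			return true
		}
	}
	return false
}

func main() {
	patterns := []string{"scratch_*"}
	fmt.Println(isIgnored("scratch_tmp", false, patterns)) // true
	fmt.Println(isIgnored("scratch_tmp", true, patterns))  // false: staged
	fmt.Println(isIgnored("users", false, patterns))       // false
}
```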
Tests
BATS integration tests
integration-tests/bats/system-tables.bats:
- Basic functionality: Verifies the `ignored` column correctly identifies ignored tables while non-ignored tables have `ignored=0`
- Staged tables: Confirms staged tables always have `ignored=0` regardless of ignore patterns
- System table visibility: Ensures the table appears in `dolt ls --system`
integration-tests/bats/ls.bats:
- Updated "ls: --system shows system tables" to include `dolt_status_ignored` (27 tables instead of 26)
Go enginetests
go/libraries/doltcore/sqle/enginetest/dolt_queries.go:
Closes #5862
go-mysql-server
- 3395: fewer
`strings.ToLower` calls in `gatherTableAlias`
benchmarks: #10355 (comment) - 3393: add
`transform.InspectWithOpaque` function
This changes `transform.Inspect` to not apply the helper function to children of `sql.OpaqueNode`s.
Additionally, it adds a `transform.InspectWithOpaque` that does.
This is to match `transform.Node` and `transform.NodeWithOpaque`.
There are still some inconsistencies between the different transform helper functions:
- transform.Node:
- post order (applies to node.Children then node)
- no way to break out early
- transform.Inspect:
- pre order (applies to node then node.Children)
- can break out early (only on all or none of children)
- return true to continue
- transform.InspectExpr:
- post order (applies to expr.Children then expr)
- can break out early, including stopping during children
- return false to continue
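A pre-order inspect with early exit can be sketched as follows. The node type and `opaque` flag are simplified stand-ins for GMS's real plan nodes, but the visiting contract matches the description above: the function runs on a node before its children, returning false stops descent, and the plain variant does not descend into opaque nodes.

```go
package main

import "fmt"

// node is a toy tree standing in for a sql.Node.
type node struct {
	name     string
	opaque   bool // simulates sql.OpaqueNode
	children []*node
}

// inspect visits pre-order (node before children). When f returns
// false, the subtree below n is skipped. With skipOpaque set, it
// also refuses to descend into opaque nodes, like plain Inspect.
func inspect(n *node, skipOpaque bool, f func(*node) bool) {
	if !f(n) {
		return // f returned false: do not visit children
	}
	if skipOpaque && n.opaque {
		return // do not look inside opaque nodes
	}
	for _, c := range n.children {
		inspect(c, skipOpaque, f)
	}
}

func main() {
	tree := &node{name: "root", children: []*node{
		{name: "sub", opaque: true, children: []*node{{name: "inner"}}},
	}}
	var visited []string
	inspect(tree, true, func(n *node) bool {
		visited = append(visited, n.name)
		return true
	})
	fmt.Println(visited) // [root sub] -- "inner" hides behind an opaque node
}
```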
- 3389: remove
`convertLeftAndRight`
This PR removes `c.convertLeftAndRight`, which avoids calls to `c.Left().Type()` and `c.Right().Type()`.
Not entirely sure why receiver methods would impact performance this much, but benchmarks say so.
Benchmarks: #10342 (comment)
Closed Issues
1.81.1
Merged PRs
dolt
- 10347: fixed output bug for foreign key constraints
go-mysql-server
- 3389: remove
`convertLeftAndRight`
This PR removes `c.convertLeftAndRight`, which avoids calls to `c.Left().Type()` and `c.Right().Type()`.
Not entirely sure why receiver methods would impact performance this much, but benchmarks say so.
Benchmarks: #10342 (comment) - 3384: Look at every join node parent when computing outer scopes.
Previously, when building scopes for subqueries that appear inside joins, we would only track a single parent join node. If the subquery had multiple join parents, we would only be able to resolve references to the innermost subquery. This inhibits the optimizations we can perform.
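The walker's bookkeeping can be sketched as collecting every ancestor join of a subquery rather than just the nearest one. Node kinds and names below are toy stand-ins, not GMS's actual plan nodes:

```go
package main

import "fmt"

// planNode is a toy plan tree node.
type planNode struct {
	kind     string // e.g. "join", "subquery", "table"
	children []*planNode
}

// joinParents returns every ancestor join of the first subquery it
// finds, outermost first, instead of tracking only a single parent
// join. A nil result means no subquery was found in this subtree.
func joinParents(n *planNode, ancestors []*planNode) []*planNode {
	if n.kind == "subquery" {
		out := make([]*planNode, len(ancestors))
		copy(out, ancestors)
		return out
	}
	if n.kind == "join" {
		ancestors = append(ancestors, n)
	}
	for _, c := range n.children {
		if found := joinParents(c, ancestors); found != nil {
			return found
		}
	}
	return nil
}

func main() {
	// A subquery nested under two joins: both joins are now visible
	// when building the subquery's outer scope.
	plan := &planNode{kind: "join", children: []*planNode{
		{kind: "join", children: []*planNode{{kind: "subquery"}}},
	}}
	fmt.Println(len(joinParents(plan, nil))) // 2
}
```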
This PR uses a custom tree walker to track a list of parent join nodes, and includes an example of a query that was not previously possible to optimize. - 3370: Wrap
`errgroup.Group.Go()` calls for consistent panic recovery
Each spawned goroutine needs a panic recovery handler to prevent an unexpected panic from crashing the entire Go process. This change introduces a helper function that wraps goroutine creation through `errgroup.Group`, installs a panic recovery handler, and updates existing code to use it.
vitess
- 448: Move
`SERIAL` type out of numeric type definitions so that the `unsigned` value is not overwritten
fixes #10345
MySQL docs - 446: Added DISTINCT ON expressions
Related PRs: