Tags: gosom/cockroach
backupccl: change CPut semantics when publishing restored tables Previously we asserted that the descriptors we are publishing as PUBLIC have not changed since they were added in the OFFLINE state. There are cases where this assertion fails, which is not ideal since we perform the check at the end of what could be a lengthy restore. The assertion was enforced via a CPut, so to prevent these last-stage failures we now first read the table descriptor stored in KV, mutate its state, and then write it back to KV. Logging has been added to collect further information on what is changing the descriptor while it is in the offline state. Informs: cockroachdb#55690 Release note: None
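The change above can be sketched with a toy in-memory store. All names here (`cputPublish`, `rmwPublish`, the string-valued "descriptors") are hypothetical stand-ins, not the real backupccl code; the point is only the difference between failing on a drifted descriptor and logging-then-overwriting it.

```go
// Sketch, assuming hypothetical types: publishing a restored descriptor by
// read-modify-write instead of a CPut against the value recorded at restore
// start.
package main

import (
	"errors"
	"fmt"
)

// kv is a toy stand-in for the descriptor keyspace.
type kv map[string]string

// cputPublish mimics the old behavior: fail if the stored descriptor no
// longer matches the value we remembered when the restore began.
func cputPublish(store kv, key, expected string) error {
	if store[key] != expected {
		return errors.New("unexpected value: descriptor changed while offline")
	}
	store[key] = "PUBLIC"
	return nil
}

// rmwPublish mimics the new behavior: read whatever descriptor is stored
// now, log if it drifted, mutate its state, and write it back.
func rmwPublish(store kv, key, expected string) {
	if cur := store[key]; cur != expected {
		fmt.Printf("descriptor %s changed while offline: %q -> %q\n", key, expected, cur)
	}
	store[key] = "PUBLIC"
}

func main() {
	store := kv{"desc/52": "OFFLINE-v2"} // drifted from the expected OFFLINE-v1
	if err := cputPublish(store, "desc/52", "OFFLINE-v1"); err != nil {
		fmt.Println("CPut path:", err) // the old late-restore failure
	}
	rmwPublish(store, "desc/52", "OFFLINE-v1") // new path: logs and publishes
	fmt.Println("final state:", store["desc/52"])
}
```

The trade-off is deliberate: the read-modify-write path gives up the hard consistency check in exchange for never aborting a multi-hour restore at its final step, while the added logging preserves the ability to investigate cockroachdb#55690.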
Merge pull request cockroachdb#55522 from rafiss/backport20.2-55472 release-20.2: sql: remove virtual index on information_schema.tables(table_name)
Merge pull request cockroachdb#55305 from solongordon/backport20.2-55292 release-20.2: sql: do not inherit role options
Merge pull request cockroachdb#55168 from asubiotto/fix-54837 release-20.1: colexec: add mutex to colBatchScan
Merge pull request cockroachdb#54977 from rytaft/backport19.2-54950 release-19.2: build: reduce verbosity and bump timeout for SQL Race Logic Test
sql: add memory accounting for fetches This commit adds a memory monitor to the kv fetcher infrastructure, initialized by its users. When a kv fetch occurs, the new infrastructure ensures that at least 1 kilobyte is accounted for before the fetch happens. Once the fetch returns, the accounting is adjusted to include the entire size of the fetch. Subsequent fetches that return less memory do *not* ratchet the allocation size back down, preserving safety and avoiding pointless allocation-adjustment churn. This memory monitoring is gated behind a new cluster setting called `sql.scan_memory_accounting.enabled` in 20.1. In 20.2, this memory monitoring will always be on, and no such cluster setting will exist. Release note (sql change): the memory used by disk scans is now accounted for, reducing the likelihood of out-of-memory conditions that crash the process rather than returning a SQL out-of-memory error. This new behavior is off by default, gated behind the sql.scan_memory_accounting.enabled cluster setting. Release justification: fixes for high-priority bugs in existing functionality
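The accounting policy described above (a 1 KiB floor before the fetch, grow-only adjustment after it) can be sketched as follows. `fetchAccount` and its methods are hypothetical names, not the real monitor API; only the ratcheting behavior reflects the commit.

```go
// Sketch, assuming a hypothetical API: an account that reserves at least
// 1 KiB before a fetch is issued and only ever grows afterwards, so smaller
// subsequent fetches don't churn the reservation.
package main

import "fmt"

const minFetchReservation = 1 << 10 // 1 KiB

type fetchAccount struct {
	reserved int64 // bytes currently accounted for
}

// beforeFetch ensures at least 1 KiB is reserved before the fetch happens.
func (a *fetchAccount) beforeFetch() {
	if a.reserved < minFetchReservation {
		a.reserved = minFetchReservation
	}
}

// afterFetch grows the reservation to cover the actual fetch size, but never
// shrinks it: ratcheting down would only cause pointless adjustment churn,
// and keeping the high-water mark is the safe direction for OOM avoidance.
func (a *fetchAccount) afterFetch(fetchBytes int64) {
	if fetchBytes > a.reserved {
		a.reserved = fetchBytes
	}
}

func main() {
	var acc fetchAccount
	acc.beforeFetch()
	acc.afterFetch(4096) // a 4 KiB fetch grows the reservation
	acc.beforeFetch()
	acc.afterFetch(512) // a smaller fetch does not shrink it
	fmt.Println(acc.reserved)
}
```

In the real system the reservation would be registered with a SQL memory monitor so that exceeding the budget surfaces as a SQL out-of-memory error instead of a process crash.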
Merge pull request cockroachdb#54800 from itsbilal/bump-pebble-20.2 release-20.2: vendor: Bump pebble to 0602c847ff9df5c90a9611a73c5b4b0cef6fff39
Merge pull request cockroachdb#54659 from dhartunian/backport20.2-52609 release-20.2: auth: Add OIDC as a login option for Admin UI
kvserver: reintroduce RangeDesc.GenerationComparable We dropped this field recently, but unfortunately that wasn't safe for mixed-version clusters. The rub is that 20.1 nodes need to round-trip the proto through 20.2 nodes in a fairly subtle way: when it comes back to the 20.1 node, the descriptor needs to compare Equal() to the original. We configure our protos to not preserve unrecognized fields, so removing the field breaks this round-tripping. Specifically, the scenario that broke is the following: 1. A 20.1 node processes an AdminSplit and performs the transaction writing the new descriptors. The descriptors have the GenerationComparable field set. 2. The lease changes while the respective txn is running. The lease moves to a 20.2 node. 3. The 20.2 node evaluates the EndTxn and computes the split trigger info that's going to be replicated. The EndTxn has split info in it containing the field set, but the field is dropped when converting that into the proposed SplitTrigger (since the 20.2 node unmarshals and re-marshals the descriptors). 4. All the 20.1 replicas of the ranges involved now apply the respective trigger via Raft, and their in-memory state doesn't have the field set. This doesn't match the bytes written in the database, which have the field. 5. The discrepancy between the in-memory state and the db state is a problem, as it causes the 20.1 node to spin if it tries to perform subsequent merge/split operations. The reason is that the code performing these operations short-circuits itself if it detects that the descriptor has changed while the operation was running. This detection is performed via the generated Equals() method, and it mis-fires because of the phantom field.
That detection happens here: https://github.com/cockroachdb/cockroach/blob/79c01d28da9c379f67bb41beef3d85ad3bee1da1/pkg/kv/kvserver/replica_command.go#L1957 This patch takes precautions so that we can remove the field again in 21.1: I'm merging this in 21.1, I'll backport it to 20.2, and then I'll come back to 21.1 and remove the field. Namely, the patch changes RangeDesc.Equal() to ignore that field (and the method is no longer generated). Fixes cockroachdb#53535 Release note: None
Merge pull request cockroachdb#54418 from yuzefovich/fix-upsert-20.1 release-20.1: sql: fix large UPSERTs