forked from minio/minio
[pull] release from minio:release #294
pull wants to merge 1,169 commits into szaydel:release from minio:release
Conversation
TCP_QUICKACK is a TCP setting that lets endpoints acknowledge received data immediately in situations where they would normally wait to see whether more data will arrive. https://assets.extrahop.com/whitepapers/TCP-Optimization-Guide-by-ExtraHop.pdf
Updates needed dependency as well. Fixes #11416
If a disk was skipped because it was nil, it was still being returned in the results.
```
minio server /tmp/disk{1...4}
mc mb myminio/testbucket/
mkdir -p /tmp/disk{1..4}/testbucket/test-prefix/
```
This prefix would end up being listed on current master; this PR fixes that situation. If a directory is a leaf dir, we should avoid listing it, since it can no longer be deleted with the DeleteObject and DeleteObjects() API calls now that we natively support directories. Avoid listing it and let healing purge this folder eventually in the background.
This commit refactors the SSE implementation and adds
S3-compatible SSE-KMS context handling.
SSE-KMS differs from SSE-S3 in two main aspects:
1. The client can request a particular key and
specify a KMS context as part of the request.
2. The ETag of an SSE-KMS encrypted object is not
the MD5 sum of the object content.
This commit only focuses on the 1st aspect.
A client can send an optional SSE context when using
SSE-KMS. This context is remembered by the S3 server
such that the client does not have to specify the
context again (during multipart PUT / GET / HEAD ...).
The crypto. context also includes the bucket/object
name to prevent renaming objects at the backend.
Now, AWS S3 behaves as follows:
- If the user does not provide an SSE-KMS context,
it does not store one, and correspondingly does not include
the SSE-KMS context header in the response (e.g. HEAD).
- If the user specifies a SSE-KMS context without
the bucket/object name then AWS stores the exact
context the client provided but adds the bucket/object
name internally. The response contains the KMS context
without the bucket/object name.
- If the user specifies a SSE-KMS context with
the bucket/object name then AWS again stores the exact
context provided by the client. The response contains
the KMS context with the bucket/object name.
This commit implements this behavior w.r.t. SSE-KMS.
However, as of now, no such object can be created since
the server rejects SSE-KMS encryption requests.
This commit is one stepping stone for SSE-KMS support.
Co-authored-by: Harshavardhana <[email protected]>
- using miniogo.ObjectInfo.UserMetadata is not correct
- using UserTags from Map->String() can change order
- ContentType comparison needs to be removed
- compare both lowercase and uppercase key names
- do not silently error out constructing PutObjectOptions if tag parsing fails
- avoid notification for empty object info; failed operations should rely on valid objInfo for notification in all situations
- optimize copyObject implementation, also introduce a new replication event
- clone ObjectInfo() before scheduling for replication
- add additional headers for comparison
- remove strings.EqualFold comparison to avoid unexpected bugs
- fix pool based proxying with multiple pools
- compare only specific metadata

Co-authored-by: Poorna Krishnamoorthy <[email protected]>
- minio-go -> v7.0.8
- ldap/v3 -> v3.2.4
- reedsolomon -> v1.9.11
- sio-go -> v0.3.1
- msgp -> v1.1.5
- simdjson-go, md5-simd, highwayhash
For some flaky networks this may be too fast a value; choose a defensive value instead, and let this be addressed properly in a new refactor of dsync with renewal logic. Also enable a faster fallback delay to cater to misconfigured IPv6 servers. Refer to:
- https://golang.org/pkg/net/#Dialer
- https://tools.ietf.org/html/rfc6555
A few places were still using the legacy call GetObject(), which was mainly designed for the client response writer; use GetObjectNInfo() for internal calls instead.
When a directory object is presented as a `prefix` param, our implementation tends to list only the objects under the `prefix`, not the `prefix` itself. To mimic AWS S3's flat-key behavior, this PR ensures that if `prefix` is a directory object, it is automatically considered part of the eventual listing result. Fixes #11370
After a recent refactor where lifecycle started to rely on ObjectInfo to make decisions, it turned out there were some issues calculating Successor Modtime and NumVersions, hence lifecycle was not working as expected on a versioned bucket in some cases. This commit fixes the behavior.
We use multiple libraries in health info, but the returned error does not indicate exactly what library call is failing, hence adding named tags to returned errors whenever applicable.
Replaces #11449 Does concurrent healing but limits concurrency to 50 buckets. Aborts on first error. `errgroup.Group` is extended to facilitate this in a generic way.
When you have a hierarchy of prefixes with directory objects,
the current master would list directory objects as prefixes
when a delimiter is present. This is inconsistent with AWS S3:
```
aws s3api list-objects --endpoint-url http://localhost:9000 \
--profile minio --bucket testbucket-v --prefix new/ --delimiter /
{
"CommonPrefixes": [
{
"Prefix": "new/"
},
{
"Prefix": "new/new/"
}
]
}
```
This PR fixes the listing to behave like AWS S3:
```
aws s3api list-objects --endpoint-url http://localhost:9000 \
--profile minio --bucket testbucket-v --prefix new/ --delimiter /
{
"Contents": [
{
"Key": "new/",
"LastModified": "2021-02-05T06:27:42.660Z",
"ETag": "\"d41d8cd98f00b204e9800998ecf8427e\"",
"Size": 0,
"StorageClass": "STANDARD",
"Owner": {
"DisplayName": "",
"ID": "02d6176db174dc93cb1b899f7c6078f08654445fe8cf1b6ce98d8855f66bdbf4"
}
}
],
"CommonPrefixes": [
{
"Prefix": "new/new/"
}
]
}
```
- the lock maintenance loop was incorrectly sleeping as well as using a ticker badly, leading to extra expiration routines getting triggered that could flood the network
- multipart upload cleanup should be based on a timer instead of a ticker, to ensure that long-running jobs don't get triggered twice
- make sure to get the right lockers for the object name
The connections info of the processes takes up a huge amount of space, and is not important for adding any useful health checks. Removing it will significantly reduce the size of the subnet health report.
When lifecycle decides to Delete an object, and not a version, in a versioned bucket, the code should create a delete marker rather than remove the scanned version. This commit fixes the issue.
This reverts commit 922c7b5.
For large objects taking more than 3 minutes in a single PUT operation, the request can time out prematurely as the 'ResponseHeader' timeout hits at 3 minutes. Avoid this by keeping the connection active during the CreateFile phase.
Use a single call to remove directly at the disk layer instead of doing it recursively at the network layer.
Add metric for canceled requests
An optimization to avoid extra syscalls in PutObject(); this adds up in our PutObject response times.
This commit updates the highwayhash version to `v1.0.2` that fixes a critical issue on arm64.
Bonus: Prealloc reasonable sizes for metrics.
Thanks to @dvaldivia for reproducing this
replication didn't work as expected when deletion of delete markers was requested in DeleteMultipleObjects API, this is due to incorrect lookup elements being used to look for delete markers.
Service accounts were not inheriting parent policies anymore due to refactors in PolicyDBGet() from the latest release; fix this behavior properly.
Fix accessing claims when auth error is unchecked. Only replaced when unchecked and when clearly without side effects. Fixes #11959
Protect updated members in xlStorage.
```
WARNING: DATA RACE
Write at 0x00c004b4ee78 by goroutine 1491:
github.com/minio/minio/cmd.(*xlStorage).GetDiskID()
d:/minio/minio/cmd/xl-storage.go:590 +0x1078
github.com/minio/minio/cmd.(*xlStorageDiskIDCheck).checkDiskStale()
d:/minio/minio/cmd/xl-storage-disk-id-check.go:195 +0x84
github.com/minio/minio/cmd.(*xlStorageDiskIDCheck).StatVol()
d:/minio/minio/cmd/xl-storage-disk-id-check.go:284 +0x16a
github.com/minio/minio/cmd.erasureObjects.getBucketInfo.func1()
d:/minio/minio/cmd/erasure-bucket.go:100 +0x1a5
github.com/minio/minio/pkg/sync/errgroup.(*Group).Go.func1()
d:/minio/minio/pkg/sync/errgroup/errgroup.go:122 +0xd7
Previous read at 0x00c004b4ee78 by goroutine 1087:
github.com/minio/minio/cmd.(*xlStorage).CheckFile.func1()
d:/minio/minio/cmd/xl-storage.go:1699 +0x384
github.com/minio/minio/cmd.(*xlStorage).CheckFile()
d:/minio/minio/cmd/xl-storage.go:1726 +0x13c
github.com/minio/minio/cmd.(*xlStorageDiskIDCheck).CheckFile()
d:/minio/minio/cmd/xl-storage-disk-id-check.go:446 +0x23b
github.com/minio/minio/cmd.erasureObjects.parentDirIsObject.func1()
d:/minio/minio/cmd/erasure-common.go:173 +0x194
github.com/minio/minio/pkg/sync/errgroup.(*Group).Go.func1()
d:/minio/minio/pkg/sync/errgroup/errgroup.go:122 +0xd7
```
Multiple disks from the same set would be writing concurrently.
```
WARNING: DATA RACE
Write at 0x00c002100ce0 by goroutine 166:
github.com/minio/minio/cmd.(*erasureSets).connectDisks.func1()
d:/minio/minio/cmd/erasure-sets.go:254 +0x82f
Previous write at 0x00c002100ce0 by goroutine 129:
github.com/minio/minio/cmd.(*erasureSets).connectDisks.func1()
d:/minio/minio/cmd/erasure-sets.go:254 +0x82f
Goroutine 166 (running) created at:
github.com/minio/minio/cmd.(*erasureSets).connectDisks()
d:/minio/minio/cmd/erasure-sets.go:210 +0x324
github.com/minio/minio/cmd.(*erasureSets).monitorAndConnectEndpoints()
d:/minio/minio/cmd/erasure-sets.go:288 +0x244
Goroutine 129 (finished) created at:
github.com/minio/minio/cmd.(*erasureSets).connectDisks()
d:/minio/minio/cmd/erasure-sets.go:210 +0x324
github.com/minio/minio/cmd.(*erasureSets).monitorAndConnectEndpoints()
d:/minio/minio/cmd/erasure-sets.go:288 +0x244
```
This change fixes handling of these types of queries:
- Double quoted column names with special characters:
SELECT "column.name" FROM s3object
- Double quoted column names with reserved keywords:
SELECT "CAST" FROM s3object
- Table name as prefix for column names:
SELECT S3Object."CAST" FROM s3object
This PR fixes:
- a leak of the bandwidth report channel
- the closer requirement for the bandwidth monitor is removed; instead, if Read() fails, remember the error and return it for all subsequent reads
- use locking for usage-cache.bin updates; with inline data we cannot afford concurrent writes corrupting xl.meta
It is inefficient to decide to heal an object before checking its lifecycle for expiration or transition. This commit simply reverses the order of actions: evaluate lifecycle first, and heal only if requested and the lifecycle evaluation results in NoneAction.
Because of silly AWS S3 behavior we need to handle both types. Fixes #11920
This commit fixes a bug in the put-part implementation. The SSE headers should be set as specified by AWS - See: https://docs.aws.amazon.com/AmazonS3/latest/API/API_UploadPart.html Now, the MinIO server should set SSE-C headers, like `x-amz-server-side-encryption-customer-algorithm`. Fixes #11991
See Commits and Changes for more details.
Created by pull[bot]