Commit Graph

25 Commits

Author SHA1 Message Date
Anis Eleuch 95bf4a57b6
logging: Add subsystem to log API (#19002)
Create new code paths for multiple subsystems in the code. This will
make maintaining them easier later.

Also introduce bugLogIf() for errors that should not happen in the first
place.
2024-04-04 05:04:40 -07:00
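For context, a minimal sketch of what a bugLogIf()-style helper could look like; the package name, logger, and output format here are hypothetical, not MinIO's actual logging subsystem API:

```go
package logging

import (
	"context"
	"log"
	"runtime/debug"
)

// bugLogIfSketch is a hypothetical stand-in for the bugLogIf() helper
// mentioned above: it logs errors that should never happen, tags them as
// bugs so they stand out from routine failures, and attaches a stack trace.
func bugLogIfSketch(ctx context.Context, err error) {
	if err == nil {
		return
	}
	log.Printf("BUG: unexpected error: %v\n%s", err, debug.Stack())
}
```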
Harshavardhana 1d3bd02089
avoid close 'nil' panics if any (#18890)
brings a generic implementation that
prints a stack trace for 'nil' channel
close() calls; otherwise it closes the
channel safely.
2024-01-28 10:04:17 -08:00
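A minimal sketch of such a generic safe-close helper, assuming Go generics; the name is hypothetical and the real MinIO helper may differ in detail:

```go
package xioutil

import "runtime/debug"

// safeCloseSketch is a hypothetical version of the generic helper described
// above: closing a nil channel would panic, so for a nil channel it prints a
// stack trace to identify the caller instead; otherwise it closes normally.
func safeCloseSketch[T any](c chan<- T) {
	if c == nil {
		debug.PrintStack()
		return
	}
	close(c)
}
```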
Klaus Post ff5988f4e0
Reduce allocations (#17584)
* Reduce allocations

* Add stringsHasPrefixFold which can compare string prefixes, while ignoring case and not allocating.
* Reuse all msgp.Readers
* Reuse metadata buffers when not reading data.

* Make type safe. Make buffer 4K instead of 8K.

* Unslice
2023-07-06 16:02:08 -07:00
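A plausible shape for the stringsHasPrefixFold helper mentioned above (the exact MinIO implementation may differ); strings.EqualFold compares case-insensitively without allocating, unlike lowering both strings first:

```go
package cmd

import "strings"

// hasPrefixFoldSketch reports whether s begins with prefix, ignoring case,
// without allocating: only the first len(prefix) bytes of s are compared.
func hasPrefixFoldSketch(s, prefix string) bool {
	return len(s) >= len(prefix) && strings.EqualFold(s[:len(prefix)], prefix)
}
```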
ferhat elmas 714283fae2
cleanup ignored static analysis (#16767) 2023-03-06 08:56:10 -08:00
Klaus Post 037fe4afdc
Add listing block reuse (#15579)
When streaming results, return metadata slices to a pool once they have been sent.
2022-08-24 09:11:16 -07:00
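The block reuse described above could look roughly like the following sync.Pool sketch; the entry type and capacity are hypothetical, not MinIO's actual listing types:

```go
package cmd

import "sync"

// metaEntrySketch is a hypothetical, simplified listing metadata entry.
type metaEntrySketch struct {
	name     string
	metadata []byte
}

// blockPool holds reusable slices that back listing blocks: a slice is taken
// from the pool, filled, streamed to the client, and then returned so the
// next block can reuse the allocation instead of allocating a new one.
var blockPool = sync.Pool{
	New: func() interface{} { s := make([]metaEntrySketch, 0, 1000); return &s },
}

func getBlock() *[]metaEntrySketch { return blockPool.Get().(*[]metaEntrySketch) }

func putBlock(s *[]metaEntrySketch) {
	*s = (*s)[:0] // reset length, keep capacity for reuse
	blockPool.Put(s)
}
```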
Harshavardhana f527c708f2
run gofumpt cleanup across code-base (#14015) 2022-01-02 09:15:06 -08:00
Harshavardhana 2f1e8ba612
add more directory marker tests and fix a bug (#13871)
ListObjects() should never list a delete-marked folder
if the latest version is a delete marker and no
delimiter is provided.

ListObjectVersions() should list a delete-marked folder
even if the latest version is a delete marker and no
delimiter is provided.

Further enhance versioned listing on buckets.
2021-12-09 14:59:23 -08:00
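In outline, the two rules read like the following predicate; the entry type and flag names are hypothetical, and delimiter handling is elided:

```go
package cmd

// listEntrySketch is a hypothetical, simplified listing entry.
type listEntrySketch struct {
	name                 string
	latestIsDeleteMarker bool
}

// visibleInListing sketches the rule described above for the non-delimited
// case: plain ListObjects() hides an entry whose latest version is a delete
// marker, while ListObjectVersions() still lists it.
func visibleInListing(e listEntrySketch, listVersions bool) bool {
	if listVersions {
		return true
	}
	return !e.latestIsDeleteMarker
}
```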
Harshavardhana dcff6c996d
fix: do not list delete-marked objects (#13864)
Delete-marked objects should not be considered for
listing when the listing is delimited. This issue was
introduced in PR #13804, which was mainly meant to
address listing of directories when delimited.

This PR fixes the issue properly and adds tests to
ensure that we behave in accordance with how the S3
API behaves for ListObjects() without versions.
2021-12-08 17:34:52 -08:00
Harshavardhana acf26c5ab7 re-arrange metacache struct to be optimal (#13609) 2021-11-08 10:26:08 -08:00
Klaus Post 1080609c86
Reuse buffers when writing metadata (#13040)
Simplify returning buffers.

Tested using `warp mixed --duration=1m --obj.size=100K`:

```
Operation: DELETE
Operations: 7148 -> 7642
* Average: +6.77% (+8.1) obj/s
-------------------
Operation: GET
Operations: 32200 -> 34403
* Average: +6.74% (+3.5 MiB/s) throughput, +6.74% (+36.2) obj/s
* First Byte: Average: -105.403µs (-3%), Median: -309µs (-11%), Best: -2.7µs (-0%), Worst: +3.5637ms (+3%)
-------------------
Operation: PUT
Operations: 10741 -> 11475
* Average: +6.78% (+1.2 MiB/s) throughput, +6.78% (+12.1) obj/s
-------------------
Operation: STAT
Operations: 21465 -> 22927
* Average: +6.71% (+24.0) obj/s
```
2021-08-23 11:17:27 -07:00
Klaus Post 7d8413a589
Reuse more metadata buffers (#12955)
Reuse metadata buffers when no longer referenced.

Takes care of most of the happy paths.
2021-08-13 11:39:27 -07:00
Klaus Post 89febdb3d6
Reuse small buffers (#12948)
When reading metadata, allow reuse of buffers
in certain cases. Take the low-hanging fruit.

Reduce GC overhead when listing.
2021-08-12 14:27:22 -07:00
Klaus Post 05aebc52c2
feat: Implement listing version 3.0 (#12605)
Co-authored-by: Harshavardhana <harsha@minio.io>
2021-07-05 15:34:41 -07:00
Harshavardhana 1f262daf6f
rename all remaining packages to internal/ (#12418)
This is to ensure that there are no projects
that try to import `minio/minio/pkg` into
their own repo. Any such common packages should
go to `https://github.com/minio/pkg`
2021-06-01 14:59:40 -07:00
Harshavardhana 32d8a48d4e
reduce memory usage in metacache reader (#12334) 2021-05-20 09:00:11 -07:00
Harshavardhana 069432566f update license change for MinIO
Signed-off-by: Harshavardhana <harsha@minio.io>
2021-04-23 11:58:53 -07:00
Klaus Post e0d3a8c1f4
Alloc less for metacache decompression (#12134)
Network streams are limited to 16K blocks. Don't alloc more upfront.

Signed-off-by: Klaus Post <klauspost@gmail.com>
2021-04-23 10:27:42 -07:00
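A sketch of the change, assuming the s2 reader's upfront allocation option from github.com/klauspost/compress/s2 is the knob being tuned; the wrapper function itself is illustrative:

```go
package cmd

import (
	"io"

	"github.com/klauspost/compress/s2"
)

// newMetacacheDecompressor sketches the idea described above: since network
// streams never carry blocks larger than 16 KiB, the decompressing reader is
// told to allocate only a 16 KiB block buffer upfront instead of the larger
// default.
func newMetacacheDecompressor(r io.Reader) *s2.Reader {
	return s2.NewReader(r, s2.ReaderAllocBlock(16<<10))
}
```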
Harshavardhana 5982965839
fix: re-use bytes.Buffer using sync.Pool (#11156) 2020-12-22 23:22:37 -08:00
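The sync.Pool pattern referenced in this commit, as a self-contained sketch (the helper names are illustrative, not MinIO's):

```go
package cmd

import (
	"bytes"
	"sync"
)

// bufPool reuses bytes.Buffer values across requests so repeated operations
// do not allocate a fresh buffer each time.
var bufPool = sync.Pool{
	New: func() interface{} { return new(bytes.Buffer) },
}

// withPooledBuffer fetches a buffer from the pool, resets it, hands it to fn,
// and returns it to the pool afterwards.
func withPooledBuffer(fn func(buf *bytes.Buffer)) {
	buf := bufPool.Get().(*bytes.Buffer)
	buf.Reset()
	fn(buf)
	bufPool.Put(buf)
}
```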
Harshavardhana 35fafb837b
fix: issues with handling delete markers in metacache (#11150)
Additional cases handled:

- address situations where healing is not
  triggered on failed writes and deletes.

- consider an object to exist during listing when its
  metadata can be successfully decoded.
2020-12-22 09:16:43 -08:00
Klaus Post 0422eda6a2
metacache: Always close block writer (#10973)
In some cases a writer could be left behind unclosed, leaking compression blocks.

Always close the writer, and set compression concurrency to 2, which should be enough to keep up.
2020-11-25 09:37:30 -08:00
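In outline, the fix amounts to something like the sketch below, using github.com/klauspost/compress/s2; the function itself is hypothetical:

```go
package cmd

import (
	"io"

	"github.com/klauspost/compress/s2"
)

// compressToSketch illustrates the fix described above: the s2 writer is
// created with a concurrency of 2 and is closed on every path, so its
// compression blocks are released even when the copy fails part-way.
func compressToSketch(dst io.Writer, src io.Reader) (err error) {
	w := s2.NewWriter(dst, s2.WriterConcurrency(2))
	defer func() {
		// Always close, even on error, so blocks are not leaked.
		if cerr := w.Close(); cerr != nil && err == nil {
			err = cerr
		}
	}()
	_, err = io.Copy(w, src)
	return err
}
```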
Klaus Post a75fafdbe2
Remove msgp workaround (#10964)
The error in `github.com/philhofer/fwd` was quickly fixed through 
https://github.com/philhofer/fwd/pull/22 - update the dependency 
and remove the workaround.
2020-11-24 11:58:10 -08:00
Klaus Post a58b7874ef
Temporary workaround for msgp skipping (#10960)
Due to https://github.com/philhofer/fwd/issues/20, when skipping a metadata entry that is >2048 bytes while the buffer is full (2048 bytes), the skip will fail with `io.ErrNoProgress`.

Enlarge the buffer so that this temporarily becomes much less likely.

If it still happens, we will have to rewrite the skips as reads.

Fixes #10959
2020-11-23 18:51:59 -08:00
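A sketch of the workaround, assuming the metacache reader is built with github.com/tinylib/msgp; the buffer size shown is illustrative, the point is simply that it exceeds the 2048 bytes mentioned above:

```go
package cmd

import (
	"io"

	"github.com/tinylib/msgp/msgp"
)

// newMetacacheReaderSketch gives the msgp reader a buffer larger than the
// 2048 bytes mentioned above, so skipping an oversized metadata entry is far
// less likely to stall and fail with io.ErrNoProgress.
func newMetacacheReaderSketch(r io.Reader) *msgp.Reader {
	return msgp.NewReaderSize(r, 16<<10)
}
```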
Harshavardhana ca88ca753c
ignore typed errors correctly in list cache layer (#10879)
bonus: write bucket metadata cache with enough quorum

Possible fix for #10868
2020-11-12 09:28:56 -08:00
Klaus Post d1e1205036
metacache: Always close the s2 writer (#10836)
The s2 writer could be leaked if there was an error.

Make sure it is always closed.
2020-11-05 07:30:14 -08:00
Klaus Post a982baff27
ListObjects Metadata Caching (#10648)
Design: https://gist.github.com/klauspost/025c09b48ed4a1293c917cecfabdf21c

Gist of improvements:

* Cross-server caching and listing will use the same data across servers and requests.
* Lists can be arbitrarily resumed at a constant speed.
* Metadata for all files scanned is stored for streaming retrieval.
* The existing bloom filters controlled by the crawler are used for validating caches.
* Concurrent requests for the same data (or parts of it) will not spawn additional walkers.
* Listing a subdirectory of an existing recursive cache will use the cache.
* All listing operations are fully streamable, so the number of objects in a bucket no
  longer dictates the amount of memory (see the sketch below).
* Listings can be handled by any server within the cluster.
* Caches are cleaned up when out of date or superseded by a more recent one.
2020-10-28 09:18:35 -07:00
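To make the "fully streamable" point concrete, a minimal sketch of a channel-based listing pipeline; the types and signature are illustrative, not MinIO's actual API:

```go
package cmd

// cachedEntrySketch is a hypothetical, simplified listing entry.
type cachedEntrySketch struct {
	name     string
	metadata []byte
}

// streamListingSketch forwards entries to the caller one at a time as they
// arrive from the cache walker, so memory use stays constant regardless of
// how many objects the bucket holds.
func streamListingSketch(entries <-chan cachedEntrySketch, send func(cachedEntrySketch) error) error {
	for e := range entries {
		if err := send(e); err != nil {
			return err
		}
	}
	return nil
}
```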