Remove re-slicing for multipart #931

Merged · 7 commits · Mar 26, 2024

Changes from all commits
8 changes: 8 additions & 0 deletions api/data/tree.go
@@ -75,6 +75,7 @@ type MultipartInfo struct {
Created time.Time
Meta map[string]string
CopiesNumber uint32
SplitID string
}

// PartInfo is upload information about part.
@@ -89,6 +90,13 @@ type PartInfo struct {
Created time.Time
// Server creation time.
ServerCreated time.Time

// MultipartHash contains the internal state of the [hash.Hash] used to calculate the whole-object payload hash.
MultipartHash []byte
// HomoHash contains the internal state of the [hash.Hash] used to calculate the whole-object homomorphic payload hash.
HomoHash []byte
// Elements contains the [oid.ID] list of objects for the current part.
Elements []oid.ID
}

// ToHeaderString form short part representation to use in S3-Completed-Parts header.
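A note on the two hash-state fields added above: Go's standard hashes (sha256 among them) implement encoding.BinaryMarshaler and encoding.BinaryUnmarshaler, so a running hash can be snapshotted after one part and resumed when the next part arrives, which appears to be what these fields are for. A minimal, self-contained sketch of the idea, separate from the gateway code (the payload strings are invented for illustration):

package main

import (
	"crypto/sha256"
	"encoding"
	"fmt"
)

func main() {
	h := sha256.New()
	h.Write([]byte("part-1 payload"))

	// Snapshot the running state, i.e. the kind of bytes that would be kept in MultipartHash.
	state, err := h.(encoding.BinaryMarshaler).MarshalBinary()
	if err != nil {
		panic(err)
	}

	// Later, restore the state into a fresh hash and continue with the next part.
	h2 := sha256.New()
	if err := h2.(encoding.BinaryUnmarshaler).UnmarshalBinary(state); err != nil {
		panic(err)
	}
	h2.Write([]byte("part-2 payload"))

	// Same digest as hashing both payloads in one pass.
	fmt.Printf("%x\n", h2.Sum(nil))
}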
3 changes: 1 addition & 2 deletions api/handler/acl.go
@@ -8,7 +8,6 @@
"encoding/json"
"encoding/xml"
"errors"
stderrors "errors"
"fmt"
"net/http"
"sort"
@@ -1464,7 +1463,7 @@

for _, grant := range acp.AccessControlList {
if !isValidGrant(grant) {
return nil, stderrors.New("unsupported grantee")
return nil, errors.New("unsupported grantee")

Codecov warning: added line api/handler/acl.go#L1466 was not covered by tests.
}
if grant.Grantee.ID == acp.Owner.ID {
found = true
Expand Down
8 changes: 4 additions & 4 deletions api/handler/handlers_test.go
@@ -78,14 +78,14 @@ func prepareHandlerContext(t *testing.T) *handlerContext {
require.NoError(t, err)
anonSigner := user.NewAutoIDSignerRFC6979(anonKey.PrivateKey)

signer := user.NewAutoIDSignerRFC6979(key.PrivateKey)
owner := signer.UserID()

l := zap.NewExample()
tp := layer.NewTestNeoFS()
tp := layer.NewTestNeoFS(signer)

testResolver := &contResolver{layer: tp}

signer := user.NewAutoIDSignerRFC6979(key.PrivateKey)
owner := signer.UserID()

layerCfg := &layer.Config{
Caches: layer.DefaultCachesConfigs(zap.NewExample()),
GateKey: key,
9 changes: 9 additions & 0 deletions api/layer/layer.go
@@ -9,6 +9,7 @@
"net/url"
"strconv"
"strings"
"sync"
"time"

"github.com/nats-io/nats.go"
@@ -50,6 +51,7 @@
ncontroller EventListener
cache *Cache
treeService TreeService
buffers *sync.Pool
}

Config struct {
@@ -266,13 +268,20 @@
// NewLayer creates an instance of a layer. It checks credentials
// and establishes gRPC connection with the node.
func NewLayer(log *zap.Logger, neoFS NeoFS, config *Config) Client {
buffers := sync.Pool{}
buffers.New = func() any {
b := make([]byte, neoFS.MaxObjectSize())
Inline review thread:

Member: IMO, a potential OOM killer is here, the same way we already fixed this in SDK.

Contributor (author): Links to changes, please.

Contributor (author): Using sync.Pool showed a better memory usage profile after load tests. It already works here:

	data := x.buffers.Get()

Member: nspcc-dev/neofs-sdk-go#539 (in reply to the request for links). Yes, sync.Pool already works here, but putting 1K objects of 1 byte each simultaneously leads to 64 GB of memory usage while only about 1 KB is actually needed; we already know we can be quite successful at running out of 1 TB of memory, not a cool story. But OK, as I understand it, we will return to this later.

return &b
}

Codecov warning: added lines api/layer/layer.go#L273-L275 were not covered by tests.

return &layer{
neoFS: neoFS,
log: log,
anonymous: config.Anonymous,
resolver: config.Resolver,
cache: NewCache(config.Caches),
treeService: config.TreeService,
buffers: &buffers,
}
}

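For context on the buffer pool added in this hunk: each caller borrows a pre-allocated byte slice of MaxObjectSize from the pool and is expected to return it, which avoids repeated large allocations but means every concurrent upload pins a full-size buffer regardless of payload size. The reviewer's 64 GB figure follows from that arithmetic: assuming a 64 MiB maximum object size (which is what the numbers in the thread imply), 1,000 simultaneous 1-byte puts hold roughly 1,000 × 64 MiB ≈ 64 GB. A minimal, self-contained sketch of the pattern, not the gateway's actual code (maxObjectSize and handlePayload are invented names):

package main

import (
	"fmt"
	"sync"
)

// maxObjectSize stands in for neoFS.MaxObjectSize(); 64 MiB is assumed only to
// match the numbers discussed in the review thread.
const maxObjectSize = 64 << 20

var buffers = sync.Pool{
	New: func() any {
		b := make([]byte, maxObjectSize)
		return &b
	},
}

// handlePayload is a hypothetical handler showing the borrow/return cycle.
func handlePayload(payload []byte) {
	buf := buffers.Get().(*[]byte) // a full-size buffer even for a 1-byte payload
	defer buffers.Put(buf)

	n := copy(*buf, payload)
	fmt.Printf("buffered %d of %d bytes\n", n, len(*buf))
}

func main() {
	handlePayload([]byte("x"))
}

Storing *[]byte rather than []byte in the pool (as the diff does with return &b) is the usual way to avoid an extra allocation when the slice header is boxed into the pool's interface value.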