Merged
server/container_create.go: 2 changes (0 additions, 2 deletions)
@@ -412,8 +412,6 @@ func hostNetwork(containerConfig *types.ContainerConfig) bool {
func (s *Server) CreateContainer(ctx context.Context, req *types.CreateContainerRequest) (res *types.CreateContainerResponse, retErr error) {
	log.Infof(ctx, "Creating container: %s", translateLabelsToDescription(req.Config.Labels))

	s.updateLock.RLock()
	defer s.updateLock.RUnlock()
Comment on lines -415 to -416

Member:
Can we be sure that everything on top of this stack is properly locked?

Contributor Author:
TBH, I don't know. I proposed this commit simply because of this observation (the unexported RWMutex is only ever read-locked):

klitkey1-mobl cri-o $ git grep updateLock HEAD^
HEAD^:server/container_create.go:       s.updateLock.RLock()
HEAD^:server/container_create.go:       defer s.updateLock.RUnlock()
HEAD^:server/sandbox_run_linux.go:      s.updateLock.RLock()
HEAD^:server/sandbox_run_linux.go:      defer s.updateLock.RUnlock()
HEAD^:server/server.go: updateLock sync.RWMutex

Unless it is already broken, removing that lock should not break anything.
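
To illustrate that observation, here is a minimal standalone sketch (plain Go, not CRI-O code; the names are made up): a sync.RWMutex that is only ever read-locked never blocks anyone, because any number of goroutines may hold the read lock at once. Exclusion only happens when something calls Lock(), which updateLock never did.

package main

import (
	"fmt"
	"sync"
)

func main() {
	var updateLock sync.RWMutex // only ever RLock'd, like the removed field
	var wg sync.WaitGroup

	for i := 0; i < 4; i++ {
		wg.Add(1)
		go func(id int) {
			defer wg.Done()
			updateLock.RLock()
			defer updateLock.RUnlock()
			// All readers can hold the read lock at the same time; the
			// lock imposes no ordering or exclusion between them.
			fmt.Printf("reader %d holds the read lock\n", id)
		}(i)
	}
	wg.Wait()
}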

Member:
Yeah, I agree. There is always the risk that we reveal some missing lower-level locks, which would cause a regression. I think we should be fine removing those locks. @haircommander WDYT?

Member:
Yeah, I agree: if we're only ever read-locking it, it doesn't seem like we need it.
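
For contrast, a hypothetical sketch of the only pattern that would have justified keeping the lock: some writer path taking Lock() while the request handlers take RLock(). The server type and reloadConfig below are invented for illustration and do not exist in the repo; the grep above shows no write-lock caller in the tree, so the read locks being removed never had a writer to wait for.

package main

import "sync"

type server struct {
	updateLock sync.RWMutex
	config     string // stand-in for state a writer would mutate
}

// Hypothetical writer: while it holds Lock(), every RLock() in the
// request handlers would block until the update finished.
func (s *server) reloadConfig(newCfg string) {
	s.updateLock.Lock()
	defer s.updateLock.Unlock()
	s.config = newCfg
}

// Mirrors the handlers in this diff: read-lock only.
func (s *server) createContainer() string {
	s.updateLock.RLock()
	defer s.updateLock.RUnlock()
	return s.config
}

func main() {
	s := &server{config: "v1"}
	go s.reloadConfig("v2") // only with a writer like this does the RLock do any work
	_ = s.createContainer()
}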

	sb, err := s.getPodSandboxFromRequest(req.PodSandboxID)
	if err != nil {
		if err == sandbox.ErrIDEmpty {
server/sandbox_run_linux.go: 3 changes (0 additions, 3 deletions)
@@ -269,9 +269,6 @@ func (s *Server) getSandboxIDMappings(sb *libsandbox.Sandbox) (*idtools.IDMappin
}

func (s *Server) runPodSandbox(ctx context.Context, req *types.RunPodSandboxRequest) (resp *types.RunPodSandboxResponse, retErr error) {
	s.updateLock.RLock()
	defer s.updateLock.RUnlock()

	sbox := sandbox.New()
	if err := sbox.SetConfig(req.Config); err != nil {
		return nil, errors.Wrap(err, "setting sandbox config")
server/server.go: 2 changes (0 additions, 2 deletions)
@@ -65,8 +65,6 @@ type Server struct {
	monitorsChan chan struct{}
	defaultIDMappings *idtools.IDMappings

	updateLock sync.RWMutex

	// pullOperationsInProgress is used to avoid pulling the same image in parallel. Goroutines
	// will block on the pullResult.
	pullOperationsInProgress map[pullArguments]*pullOperation