
Docker slowness on Mac #4042


Closed
ThomasLabstep opened this issue May 17, 2021 · 3 comments
Labels: status: response required (Waiting for a response from the reporter)



ThomasLabstep commented May 17, 2021

Type of request: This is a ...

[x] Performance issue

Detailed description

time curl http://localhost:4566/restapis/vhupi9pjeg/local/_user_request_/hello

The Lambda is a hello-world function deployed with the Serverless Framework.

Expected behavior

~50ms

Actual behavior

7000ms (not just the first time)

Steps to reproduce

Command used to start LocalStack

docker-compose up

version: '2.1'
services:
  localstack:
    image: localstack/localstack
    ports:
      - "4566:4566"
      - "4571:4571"
    environment:
      - DEBUG=1
      - DATA_DIR=/tmp/localstack
      - LAMBDA_EXECUTOR=docker-reuse
      - LAMBDA_REMOTE_DOCKER=false
      - HOST_TMP_FOLDER=${PWD}/.localstack
      - DOCKER_HOST=unix:///var/run/docker.sock
    volumes:
      - "/var/run/docker.sock:/var/run/docker.sock"

Client code (AWS SDK code snippet, or sequence of "awslocal" commands)

Localstack logs:

localstack_1  | 2021-05-17T19:23:38:INFO:localstack.services.awslambda.lambda_executors: Running lambda cmd:  docker exec -e AWS_NODEJS_CONNECTION_REUSE_ENABLED="$AWS_NODEJS_CONNECTION_REUSE_ENABLED" -e AWS_ACCESS_KEY_ID="$AWS_ACCESS_KEY_ID" -e AWS_SECRET_ACCESS_KEY="$AWS_SECRET_ACCESS_KEY" -e AWS_REGION="$AWS_REGION" -e AWS_LAMBDA_EVENT_BODY="$AWS_LAMBDA_EVENT_BODY" -e LOCALSTACK_HOSTNAME="$LOCALSTACK_HOSTNAME" -e EDGE_PORT="$EDGE_PORT" -e _HANDLER="$_HANDLER" -e AWS_LAMBDA_FUNCTION_TIMEOUT="$AWS_LAMBDA_FUNCTION_TIMEOUT" -e AWS_LAMBDA_FUNCTION_NAME="$AWS_LAMBDA_FUNCTION_NAME" -e AWS_LAMBDA_FUNCTION_VERSION="$AWS_LAMBDA_FUNCTION_VERSION" -e AWS_LAMBDA_FUNCTION_INVOKED_ARN="$AWS_LAMBDA_FUNCTION_INVOKED_ARN" -e AWS_LAMBDA_COGNITO_IDENTITY="$AWS_LAMBDA_COGNITO_IDENTITY" -e NODE_TLS_REJECT_UNAUTHORIZED="$NODE_TLS_REJECT_UNAUTHORIZED" -e _LAMBDA_SERVER_PORT="$_LAMBDA_SERVER_PORT" localstack_lambda_arn_aws_lambda_us-east-1_000000000000_function_pathofchild-graphql-local-hello /var/rapid/init --bootstrap /var/runtime/bootstrap --enable-msg-logs .webpack/service/lib/handler.hello
localstack_1  | 2021-05-17T19:23:45:DEBUG:localstack.services.awslambda.lambda_executors: Lambda arn:aws:lambda:us-east-1:000000000000:function:pathofchild-graphql-local-hello result / log output:
localstack_1  | null
localstack_1  | > START RequestId: 35a0c8d2-cf41-1be7-86ae-81e700afced8 Version: $LATEST
localstack_1  | > END RequestId: 35a0c8d2-cf41-1be7-86ae-81e700afced8
localstack_1  | > REPORT RequestId: 35a0c8d2-cf41-1be7-86ae-81e700afced8	Init Duration: 30000.00 ms	Duration: 6.33 ms	Billed Duration: 7 ms	Memory Size: 1536 MB	Max Memory Used: 113 MB

The docker exec step is taking 95% of the time. Why is that?
I'm on macOS 11.3.1. Everything else is on the latest version.

Mounting the temp. directory: Note that on MacOS you may have to run TMPDIR=/private$TMPDIR docker-compose up if $TMPDIR contains a symbolic link that cannot be mounted by Docker. (See details here: https://bitbucket.org/atlassian/localstack/issues/40/getting-mounts-failed-on-docker-compose-up)

Not an issue anymore.

If you are experiencing slow performance with Lambdas in Mac OS, you could either (1) try mounting local code directly into the Lambda container, or (2) disable mounting the temporary directory into the LocalStack container in docker-compose. (See also #2515)
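Concretely, option (2) just means leaving the temp-folder mount out of the volumes section of the compose file; a rough sketch (the commented-out path is only an example, matching my HOST_TMP_FOLDER above):

services:
  localstack:
    volumes:
      # only the Docker socket is mounted; a temp-folder mount such as
      # "${PWD}/.localstack:/tmp/localstack" is deliberately left out
      - "/var/run/docker.sock:/var/run/docker.sock"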

As you can see, I disabled the volume, but it's still super slow.

Using local code with Lambda
In order to mount a local folder, ensure that LAMBDA_REMOTE_DOCKER is set to false then set the S3 bucket name to __local__ and the S3 key to your local path:

awslocal lambda create-function --function-name myLambda \
    --code S3Bucket="__local__",S3Key="/my/local/lambda/folder" \
    --handler index.myHandler \
    --runtime nodejs8.10 \
    --role whatever

How do you specify S3Bucket and S3Key with the Serverless Framework?
I couldn't find any clue.
Cf: https://forum.serverless.com/t/is-it-possible-to-specify-the-s3-key-in-serverless-yml/5702
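From what I can tell (not verified), the serverless-localstack plugin's lambda.mountCode option is meant to take care of the __local__ bucket / local-path key for you, so you would not set S3Bucket/S3Key by hand. A sketch, assuming that plugin is installed and a local stage is configured:

plugins:
  - serverless-localstack

custom:
  localstack:
    stages:
      - local
    lambda:
      # supposedly mounts the local code into the Lambda container, i.e. the
      # Serverless Framework counterpart of S3Bucket="__local__"
      mountCode: true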

I'm trying this now: https://github.com/cipri-p/serverless-package-location-customizer

ServerlessError: Could not locate deployment bucket

Thanks for any feedback.

mgagliardo (Contributor) commented May 19, 2021

Hello @ThomasLabstep, I was unable to reproduce this. Below you can see my config and files, taken from this example.

System:

MacOS Big Sur 11.3.1 (20E241)
1.4 GHz Quad-Core Intel Core i5
16 GB 2133 MHz LPDDR3
Docker version 20.10.6, build 370c289

docker-compose.yml:

version: "3.9"

services:
  localstack:
    container_name: localstack_main
    image: localstack/localstack
    network_mode: bridge
    ports:
      - "4566:4566"
    environment:
      - DEBUG=1
      - LAMBDA_EXECUTOR=docker-reuse
      - LAMBDA_REMOTE_DOCKER=false
      - HOST_TMP_FOLDER=/tmp/localstack
      - DOCKER_HOST=unix:///var/run/docker.sock
    volumes:
      - "/tmp/localstack:/tmp/localstack"
      - "/var/run/docker.sock:/var/run/docker.sock"

serverless.yml:

service: candidate-service
frameworkVersion: '2'

plugins:
  - serverless-localstack
  - serverless-offline
  - serverless-s3-local

provider:
  name: aws
  runtime: nodejs12.x
  stage: dev
  region: us-east-1
  profile: local

functions:
  candidateSubmission:
    handler: api/candidate.submit
    memorySize: 128
    description: Submit candidate information and starts interview process.
    events:
      - http: 
          path: candidates
          method: post

custom:
  localstack:
    stages:
      - local
  bucket: test-bucket
  host: http://localhost
  edgePort: 4566
  lambda:
    mountCode: False
  s3:
    host: localhost
    directory: /tmp

candidate.js:

'use strict';

module.exports.submit = (event, context, callback) => {
  const response = {
    statusCode: 200,
    body: JSON.stringify({
      message: 'Go Serverless v1.0! Your function executed successfully!',
      input: event,
    }),
  };
  callback(null, response);
};

Commands (The usual I guess):

$ docker compose up
$ npm install --save-dev serverless-localstack serverless-offline serverless-s3-local
$ sls deploy -s local --profile local

Time curl:

$ time curl -X POST  http://localhost:4566/restapis/hpz0gbc59e/local/_user_request_/candidates
{"message":"Go Serverless v1.0! Your function executed successfully!","input":{"path":"/candidates","headers":{"Remote-Addr":"172.17.0.1","Host":"localhost:4566","User-Agent":"curl/7.64.1","Accept":"*/*","X-Forwarded-For":"172.17.0.1, localhost:4566, 127.0.0.1, localhost:4566","x-localstack-edge":"http://localhost:4566","Authorization":"AWS4-HMAC-SHA256 Credential=__internal_call__/20160623/us-east-1/apigateway/aws4_request, SignedHeaders=content-type;host;x-amz-date;x-amz-target, Signature=1234","x-localstack-tgt-api":"apigateway"},"multiValueHeaders":{"Remote-Addr":["172.17.0.1"],"Host":["localhost:4566"],"User-Agent":["curl/7.64.1"],"Accept":["*/*"],"X-Forwarded-For":["172.17.0.1, localhost:4566, 127.0.0.1, localhost:4566"],"x-localstack-edge":["http://localhost:4566"],"Authorization":["AWS4-HMAC-SHA256 Credential=__internal_call__/20160623/us-east-1/apigateway/aws4_request, SignedHeaders=content-type;host;x-amz-date;x-amz-target, Signature=1234"],"x-localstack-tgt-api":["apigateway"]},"body":"","isBase64Encoded":false,"httpMethod":"POST","queryStringParameters":{},"multiValueQueryStringParameters":{},"pathParameters":{},"resource":"/candidates","requestContext":{"path":"/local/candidates","resourcePath":"/candidates","apiId":"hpz0gbc59e","domainPrefix":"hpz0gbc59e","domainName":"hpz0gbc59e.execute-api.localhost.localstack.cloud","accountId":"000000000000","resourceId":"l5hbfz9e3z","stage":"local","identity":{"accountId":"000000000000","sourceIp":"127.0.0.1","userAgent":"curl/7.64.1"},"httpMethod":"POST","protocol":"HTTP/1.1","requestTime":"2021-05-19T19:12:48.515Z","requestTimeEpoch":1621451568515},"stageVariables":{}}}
real	0m0.652s
user	0m0.003s
sys	0m0.006s

Localstack debug:

localstack_main  | 2021-05-19T19:12:49:DEBUG:localstack.services.awslambda.lambda_executors: Lambda arn:aws:lambda:us-east-1:000000000000:function:candidate-service-local-candidateSubmission result / log output:
localstack_main  | {"statusCode":200,"body":"{\"message\":\"Go Serverless v1.0! Your function executed successfully!\",\"input\":{\"path\":\"/candidates\",\"headers\":{\"Remote-Addr\":\"172.17.0.1\",\"Host\":\"localhost:4566\",\"User-Agent\":\"curl/7.64.1\",\"Accept\":\"*/*\",\"X-Forwarded-For\":\"172.17.0.1, localhost:4566, 127.0.0.1, localhost:4566\",\"x-localstack-edge\":\"http://localhost:4566\",\"Authorization\":\"AWS4-HMAC-SHA256 Credential=__internal_call__/20160623/us-east-1/apigateway/aws4_request, SignedHeaders=content-type;host;x-amz-date;x-amz-target, Signature=1234\",\"x-localstack-tgt-api\":\"apigateway\"},\"multiValueHeaders\":{\"Remote-Addr\":[\"172.17.0.1\"],\"Host\":[\"localhost:4566\"],\"User-Agent\":[\"curl/7.64.1\"],\"Accept\":[\"*/*\"],\"X-Forwarded-For\":[\"172.17.0.1, localhost:4566, 127.0.0.1, localhost:4566\"],\"x-localstack-edge\":[\"http://localhost:4566\"],\"Authorization\":[\"AWS4-HMAC-SHA256 Credential=__internal_call__/20160623/us-east-1/apigateway/aws4_request, SignedHeaders=content-type;host;x-amz-date;x-amz-target, Signature=1234\"],\"x-localstack-tgt-api\":[\"apigateway\"]},\"body\":\"\",\"isBase64Encoded\":false,\"httpMethod\":\"POST\",\"queryStringParameters\":{},\"multiValueQueryStringParameters\":{},\"pathParameters\":{},\"resource\":\"/candidates\",\"requestContext\":{\"path\":\"/local/candidates\",\"resourcePath\":\"/candidates\",\"apiId\":\"hpz0gbc59e\",\"domainPrefix\":\"hpz0gbc59e\",\"domainName\":\"hpz0gbc59e.execute-api.localhost.localstack.cloud\",\"accountId\":\"000000000000\",\"resourceId\":\"l5hbfz9e3z\",\"stage\":\"local\",\"identity\":{\"accountId\":\"000000000000\",\"sourceIp\":\"127.0.0.1\",\"userAgent\":\"curl/7.64.1\"},\"httpMethod\":\"POST\",\"protocol\":\"HTTP/1.1\",\"requestTime\":\"2021-05-19T19:12:48.515Z\",\"requestTimeEpoch\":1621451568515},\"stageVariables\":{}}}"}
localstack_main  | > START RequestId: c8cb03c2-e90e-153c-2278-4a889470a237 Version: $LATEST
localstack_main  | > END RequestId: c8cb03c2-e90e-153c-2278-4a889470a237
localstack_main  | > REPORT RequestId: c8cb03c2-e90e-153c-2278-4a889470a237	Init Duration: 6000.00 ms	Duration: 8.02 ms	Billed Duration: 9 ms	Memory Size: 1536 MB	Max Memory Used: 41 MB

Let me know how it goes for you. Both the second invocation (the first one pulls the Lambda images) and subsequent ones work smoothly for me, even though I am running Firefox, Chrome, YouTube videos, and other containers, so my machine is doing plenty besides the docker compose stack.

ThomasLabstep (Author) commented

Thank you so much.

  s3:
    host: localhost
    directory: /tmp

This piece of code did it.

real 0m1.261s

Good enough for me.
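In case it helps anyone else: that fragment sits under custom: in serverless.yml, right next to the localstack block, as in the config above (a sketch, copied from that config):

custom:
  localstack:
    stages:
      - local
  s3:
    host: localhost
    directory: /tmp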

I haven't added any plugins. Are serverless-offline and serverless-s3-local actually useful for something?

mgagliardo (Contributor) commented

Nah, those are leftovers from other tests I ran. Glad it worked for you. Cheers mate.
