I have the following serverless.yml:
```yaml
# Welcome to Serverless!
#
# This file is the main config file for your service.
# It's very minimal at this point and uses default values.
# You can always add more config options for more control.
# We've included some commented out config examples here.
# Just uncomment any of them to get that config option.
#
# For full config options, check the docs:
#    docs.serverless.com
#
# Happy Coding!

service: awsTest

# You can pin your service to only deploy with a specific Serverless version
# Check out our docs for more details
# frameworkVersion: "=X.X.X"

plugins:
  - serverless-python-requirements

custom:
  pythonRequirements:
    invalidateCaches: true
    dockerizePip: true
    dockerImage: lambda-python3.6-with-mysql-build-deps

provider:
  name: aws
  runtime: python3.6
  role: arn:aws:iam::443746630310:role/EMR_DefaultRole

# you can overwrite defaults here
#  stage: dev
#  region: us-east-1

# you can add statements to the Lambda function's IAM Role here
#  iamRoleStatements:
#    - Effect: "Allow"
#      Action:
#        - "s3:ListBucket"
#      Resource: { "Fn::Join" : ["", ["arn:aws:s3:::", { "Ref" : "ServerlessDeploymentBucket" } ] ] }
#    - Effect: "Allow"
#      Action:
#        - "s3:PutObject"
#      Resource:
#        Fn::Join:
#          - ""
#          - - "arn:aws:s3:::"
#            - "Ref" : "ServerlessDeploymentBucket"
#            - "/*"

# you can define service wide environment variables here
#  environment:
#    variable1: value1

# you can add packaging information here
#package:
#  include:
#    - include-me.py
#    - include-me-dir/**
#  exclude:
#    - exclude-me.py
#    - exclude-me-dir/**

functions:
  emotion-analysis:
    handler: handler.emotionAnalysis
    events:
      - http:
          path: emotionAnalysis
          method: post
  audio-analysis:
    handler: handler.audioAnalysis
    events:
      - http:
          path: vokaturiAnalysis
          method: post

#    The following are a few example events you can configure
#    NOTE: Please make sure to change your handler code to work with those events
#    Check the event documentation for details
#    events:
#      - http:
#          path: users/create
#          method: get
#      - s3: ${env:BUCKET}
#      - schedule: rate(10 minutes)
#      - sns: greeter-topic
#      - stream: arn:aws:dynamodb:region:XXXXXX:table/foo/stream/1970-01-01T00:00:00.000
#      - alexaSkill
#      - iot:
#          sql: "SELECT * FROM 'some_topic'"
#      - cloudwatchEvent:
#          event:
#            source:
#              - "aws.ec2"
#            detail-type:
#              - "EC2 Instance State-change Notification"
#            detail:
#              state:
#                - pending
#      - cloudwatchLog: '/aws/lambda/hello'
#      - cognitoUserPool:
#          pool: MyUserPool
#          trigger: PreSignUp

#    Define function environment variables here
#    environment:
#      variable2: value2

# you can add CloudFormation resource templates here
#resources:
#  Resources:
#    NewResource:
#      Type: AWS::S3::Bucket
#      Properties:
#        BucketName: my-new-bucket
#  Outputs:
#    NewOutput:
#      Description: "Description for the output"
#      Value: "Some output value"
```
and the requirements.txt:
```
cycler==0.10.0
decorator==4.1.2
imutils==0.4.3
Keras==2.1.1
matplotlib==2.1.0
networkx==2.0
numpy==1.13.3
olefile==0.44
opencv-python==3.3.0.10
pandas==0.21.0
Pillow==4.3.0
pyparsing==2.2.0
python-dateutil==2.6.1
pytz==2017.3
PyWavelets==0.5.2
PyYAML==3.12
scikit-image==0.13.1
scikit-learn==0.19.1
scipy==1.0.0
six==1.11.0
sklearn==0.0
dlib==19.7.0
```
I am using this Dockerfile to compile dlib and boost:
```dockerfile
FROM amazonlinux:latest
RUN touch /var/lib/rpm/*
RUN yum install -y yum-plugin-ovl && cd /usr/src
#RUN yum check-update
#RUN rpm --rebuilddb
RUN yum history sync
RUN yum install -y wget
RUN yum install -y sudo
RUN yum install -y sudo && sudo yum install -y yum-utils && sudo yum groupinstall -y development
RUN sudo yum install -y https://centos6.iuscommunity.org/ius-release.rpm && sudo yum install -y python36u && yum install -y python36u-pip && yum install -y python36u-devel
#RUN yum install -y grub2
RUN ln -s /usr/include/python3.6m /usr/include/python3.6
RUN wget --no-check-certificate -P /tmp http://flydata-rpm.s3-website-us-east-1.amazonaws.com/patchelf-0.8.tar.gz
RUN tar xvf /tmp/patchelf-0.8.tar.gz -C /tmp
RUN cd /tmp/patchelf-0.8 && ./configure && make && sudo make install
RUN yum install -y blas-devel boost-devel lapack-devel gcc-c++ cmake git
RUN git clone https://github.com/davisking/dlib.git
RUN cd dlib/python_examples/
RUN mkdir build && cd build
RUN cmake -DPYTHON_INCLUDE_DIR=$(python3.6 -c "from distutils.sysconfig import get_python_inc; print(get_python_inc())") -DPYTHON_LIBRARY=$(python3.6 -c "import distutils.sysconfig as sysconfig; print(sysconfig.get_config_var('LIBDIR'))") -DUSE_SSE4_INSTRUCTIONS:BOOL=ON dlib/tools/python
RUN sed -i 's/\/\/all/all/' Makefile && sed -i 's/\/\/preinstall/preinstall/' Makefile
RUN cmake --build . --config Release --target install
RUN cd ..
RUN mkdir ~/dlib
RUN cp dlib.so ~/dlib/__init__.so
RUN cp /usr/lib64/libboost_python-mt.so.1.53.0 ~/dlib/
RUN touch ~/dlib/__init__.py
RUN patchelf --set-rpath '$ORIGIN' ~/dlib/__init__.so
```
When I run serverless deploy, I get the following error:
```
Error --------------------------------------------------
Error: Could not open requirements file: [Errno 2] No such file or directory: '.serverless/requirements.txt'
at ServerlessPythonRequirements.installRequirements (/Users/manavdutta1/Downloads/awsTest/node_modules/serverless-python-requirements/lib/pip.js:80:11)
From previous event:
at PluginManager.invoke (/usr/local/lib/node_modules/serverless/lib/classes/PluginManager.js:366:22)
at PluginManager.spawn (/usr/local/lib/node_modules/serverless/lib/classes/PluginManager.js:384:17)
at Deploy.BbPromise.bind.then.then (/usr/local/lib/node_modules/serverless/lib/plugins/deploy/deploy.js:120:50)
From previous event:
at Object.before:deploy:deploy [as hook] (/usr/local/lib/node_modules/serverless/lib/plugins/deploy/deploy.js:110:10)
at BbPromise.reduce (/usr/local/lib/node_modules/serverless/lib/classes/PluginManager.js:366:55)
From previous event:
at PluginManager.invoke (/usr/local/lib/node_modules/serverless/lib/classes/PluginManager.js:366:22)
at PluginManager.run (/usr/local/lib/node_modules/serverless/lib/classes/PluginManager.js:397:17)
at variables.populateService.then (/usr/local/lib/node_modules/serverless/lib/Serverless.js:104:33)
at runCallback (timers.js:785:20)
at tryOnImmediate (timers.js:747:5)
at processImmediate [as _immediateCallback] (timers.js:718:5)
From previous event:
at Serverless.run (/usr/local/lib/node_modules/serverless/lib/Serverless.js:91:74)
at serverless.init.then (/usr/local/lib/node_modules/serverless/bin/serverless:42:50)
at <anonymous>
```
I have no idea why this is happening. I have the requirements.txt under .serverless in my local directory and it looks fine. Does anyone know why this is happening?
Activity
dschep commented on Dec 7, 2017
You should keep your `requirements.txt` in the root of your service; the plugin creates the file at `.serverless/requirements.txt` itself.
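In other words, a sketch of the layout the plugin expects (file names other than serverless.yml and requirements.txt are placeholders):

```
my-service/
├── serverless.yml
├── requirements.txt        # your source file, kept next to serverless.yml
├── handler.py
└── .serverless/            # generated at package time; the plugin writes
    └── requirements.txt    # its own copy of requirements.txt in here
```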
heri16 commented on Dec 14, 2017
Are you running on Windows, @manav95? I noticed you enabled dockerizePip. See issue #105.
manav95 commented on Dec 21, 2017
I'm using macOS.
heri16 commented on Dec 21, 2017
JFox commented on Feb 23, 2018
I'm having the exact same issue as @manav95. I'm using the default Docker image on Debian Jessie.
dschep commented on Feb 23, 2018
Just thought of something it might be: add this to your `Dockerfile`:
thesmith commented on Mar 7, 2018
I'm hitting this too. It runs perfectly locally (OS X), but when using Codeship and the following Dockerfile:
Then when I run `serverless deploy` I get the same error.
Sitin commented on Mar 14, 2018
I have the same problem.
Also, when I try to pull the image inside the `docker:dind` image, I get:
I am running via `jet` and have `add_docker: true`.
avli commented on Mar 15, 2018
I get this error too when I try using the plugin on CircleCI to automate deployment. I don't use any custom Docker images, just circleci/python:3.6.4. The plugin configuration I use is as follows:
And yes, everything runs perfectly on my local machine, which runs macOS.
thesmith commented on Mar 15, 2018
I've been playing with this a bit more, and there's definitely something about running the pip install through Docker from within another Docker container.
I guess one way to get around this would be to run the pip install command without Docker, given we're already within a Docker container, as long as the host container is the right kind to build the package for Lambda.
If there were an extended version of https://github.com/lambci/docker-lambda/tree/master/python3.6 that we could use to run `serverless deploy` from, then we could set `dockerizePip: false`.
avli commented on Mar 15, 2018
@thesmith Yes, this is the current workaround I use. Thank you for posting it – it can be useful for other users who are hitting this issue.
thesmith commented on Mar 15, 2018
So this Dockerfile seems to be working; obviously dockerizePip has to be false:
Annoyingly this means you have to flip dockerizePip between deploying via CI and locally.
dschep commented on Mar 15, 2018
Ah, yeah, I'll have to check docker-in-docker out at some point.
Re this, @thesmith:
You can do something like:
(This assumes you have a `CI` env var set to `true` in CI. CircleCI does this automatically; not sure how standard it is, but it'd be easy to add the var or adapt this technique to your CI provider.)
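The snippet dschep posted did not survive in this copy of the thread. A sketch of the kind of thing he may have meant, using the Serverless Framework's `${env:..., default}` variable syntax (the `DOCKERIZE_PIP` variable name is my invention, not from the thread):

```yaml
custom:
  pythonRequirements:
    # Set DOCKERIZE_PIP=false in your CI environment so pip runs directly
    # inside the CI container; locally it defaults to true and uses Docker.
    dockerizePip: ${env:DOCKERIZE_PIP, true}
```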
brettdh commented on Nov 1, 2018
What does this mean? Is the folder cleared out before every deploy?
I can just as easily (I think) put it in my project's root dir alongside `.serverless/` instead, but I'm not sure I understand what you think will happen if I put it inside `.serverless/`, as I've done. So far so good :) But like I said, I'm curious as to what the danger might be.
I'll try to put together a minimal repro project on gitlab.com when I get a chance.
AndrewFarley commented on Nov 1, 2018
The Serverless Framework deletes that folder and recreates it every time you deploy, which defeats the purpose of the cache completely; that's what I mean.
brettdh commented on Nov 1, 2018
Yep, that'd be a good reason to move it out 😅 Thanks for the tip!
namank5x commented on Feb 22, 2019
Update:
I've tried to use this method on GitLab CI while deploying. It works when it uses the cache directory, but many times it doesn't use the cache directory, in which case it fails. Maybe if we could add a parameter to always use the cache directory, it could work?
JustinTArthur commented on Apr 9, 2019
I think the fix would be an option to use `docker cp` instead of volumes or binds for `dockerizePip`. These CI systems generally employ a remote Docker daemon as far as the main build can see.
chubzor commented on Nov 11, 2019
@thesmith or @avli
I've got the /var/task problem for a project I'm working on right now. I'm trying to use CircleCI and serverless-python-requirements.
Are you setting dockerizePip to false when deploying from CI, or are you using dockerizePip: true?
If you are using dockerizePip: true, did you supply your own Docker image for it and then add the /var/task folder?
When I try dockerizePip: false, I run into the Lambda size limit error, which is not good, even when I use slimming.
Any clarification would be great here.
alexcallow commented on Nov 15, 2019
Getting the same issue as @chubzor; any news on a fix?
chubzor commented on Nov 25, 2019
@alexcallow I ended up using this:
```yaml
custom:
  pythonRequirements:
    layer: true
    slim: true
    slimPatterns:
      - "**/test_*.py"
    strip: false
```
We abandoned deploying via local dev machines, and this worked for us when deploying with circleCI.
Mind you we have numpy, pandas, scikit-learn in requirements.
I think we are only just squeezing inside some size limit, so this is not sustainable, but it could be helpful for you.
adam0x01 commented on Dec 12, 2019
If you have `pyproject.toml` in your project but you don't use `poetry`, please remember to set `usePoetry: false`. The config will be:
Related code:
https://github.com/UnitedIncome/serverless-python-requirements/blob/64e20db2a4acbf95a3d9391797b0c12544234a0c/index.js#L41
https://github.com/UnitedIncome/serverless-python-requirements/blob/master/lib/pip.js#L65
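For reference, the override adam0x01 describes would look something like this in serverless.yml (`usePoetry` is a documented serverless-python-requirements option; the exact snippet he posted was lost in this copy):

```yaml
custom:
  pythonRequirements:
    # Fall back to requirements.txt even though a pyproject.toml is present
    usePoetry: false
```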
bxm156 commented on Mar 19, 2020
I'm encountering this as well when trying to use CircleCI. My executor is
which I believe means I'm doing docker-in-docker.
It seems this command:

```
Running docker run --rm -v /home/circleci/.cache/serverless-python-requirements/2b94ea26f9dceaadc347670525eaa71ffd73487d3460b8428f6a406f834f65af_slspyc\:/var/task\:z -v /home/circleci/.cache/serverless-python-requirements/downloadCacheslspyc\:/var/useDownloadCache\:z sls-py-reqs-custom /bin/sh -c 'chown -R 0\\:0 /var/useDownloadCache && python3.6 -m pip install -t /var/task/ -r /var/task/requirements.txt --cache-dir /var/useDownloadCache && chown -R 3434\\:3434 /var/task && cp /usr/lib64/libpq.so.5 /var/task/ && chown -R 3434\\:3434 /var/useDownloadCache'..
```

is mounting `/home/circleci/.cache/serverless-python-requirements/2b94ea26f9dceaadc347670525eaa71ffd73487d3460b8428f6a406f834f65af_slspyc` to `/var/task`, but I found this in the CircleCI documentation:
https://support.circleci.com/hc/en-us/articles/360007324514-How-can-I-mount-volumes-to-docker-containers-
"It's not possible to use volume mounting with the docker executor, but using the machine executor it's possible to mount local directories to your running Docker containers."
I switched from docker-in-docker:
to the machine executor:
And it ran successfully.
Note: CircleCI warns that machine executors may become a premium feature in the future.
AndrewFarley commented on Mar 19, 2020
@bxm156 please try my docker-in-docker support and report back. See: #484
scarfaace commented on Nov 24, 2020
This thing! pyproject.toml...
santtul commented on Apr 7, 2022
I bumped into this issue as well with macOS and minikube as the Docker platform. The problem was that the `docker run` command executed by the plugin includes volume mounts as mentioned in other comments (those `-v` switches), and volume mounts don't work out of the box with minikube. I had to run the minikube mount command for the folder like this (leave the command running):
And after that the plugin was able to generate the requirements and the build worked.
colinatwork commented on Jul 3, 2025
Ran into this recently using GitLab CI/CD. What fixed it for me was changing `dockerizePip` from `true` to `non-linux`. So now our serverless.yml looks like this: