How I developed, tested and automatically deployed an AWS Lambda for the first time

Updated: Feb 10

So I was a beginner in this topic and I didn’t realize what I needed to know to develop an AWS Lambda. I wanted to do it well, so my main focus was on code, tests, CI/CD, security, availability, reliability, and learning to use something I’d never used before… too much, I know, right? But what about infrastructure and all the possible combinations of settings needed for it to work correctly?… (developer’s anxiety kicking in).

…I took my first cup of coffee…


I started searching for tools to speed up my work and the Serverless Framework came up. In my own words, it is a cloud-agnostic framework that provides a comprehensive, easy-to-use, non-invasive and integrable way to create, deploy and destroy your serverless application. The only things you need are an AWS account, a Serverless Framework account (mine was a free one), the Serverless CLI set up, and a project started in your preferred language (I chose NodeJS because of AWS Lambda cold starts; my lambda would be off most of the time). Then execute:

>serverless create --template aws-nodejs --path path/to/your/app
>npm init

And voilà. You will have your application’s main skeleton. Then go to the “serverless.yml” file (this will be the core of your lambda, the main piece of configuration). I’ll show you something simple but not too basic (to spice things up).

Let’s say we have a little market, a few items to sell, and an inventory containing their basic information. We will need something to get the information of an item, and maybe to update it if we sell the item. Let’s practice!


  1. Give a unique name to your lambda.

  2. I told you that it was cloud agnostic, right? So tell it what your requirements are.

  3. Don’t worry! We will talk about this later; it’s just for testing purposes.

  4. Define which resources your lambda will access and the actions it will perform on them. In this case, a “Market” DynamoDB table is the only resource my lambda will access, and it can only read and update items from that table.

  5. Your lambda needs “something” through which it can be accessed. In this case, Serverless Framework will expose an API through API Gateway with two methods: the first will get an item and the second will update an item. The handlers are the JS functions containing the implementation that makes those methods work.

  6. Define the resources that you will create as if you were using Cloudformation.
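Putting the six points above together, the “serverless.yml” could look roughly like this. This is a hedged sketch, not the post’s actual file: the service name, paths, region and item attributes are illustrative; only the “Market” table, the `seek` function and the `itemID` key come from later in the post.

```yaml
service: market-api                 # 1. a unique name for your lambda service

provider:                           # 2. tell it what your requirements are
  name: aws
  runtime: nodejs12.x
  stage: dev                        # 3. stage used for testing (more on this later)
  region: us-east-1
  iamRoleStatements:                # 4. resources and allowed actions
    - Effect: Allow
      Action:
        - dynamodb:GetItem
        - dynamodb:UpdateItem
      Resource: "arn:aws:dynamodb:us-east-1:*:table/Market"

functions:                          # 5. API Gateway methods and their handlers
  seek:
    handler: handler.getItem
    events:
      - http:
          path: item
          method: post
  update:
    handler: handler.updateItem
    events:
      - http:
          path: item
          method: put

resources:                          # 6. CloudFormation-style resource definitions
  Resources:
    MarketTable:
      Type: AWS::DynamoDB::Table
      Properties:
        TableName: Market
        AttributeDefinitions:
          - AttributeName: itemID
            AttributeType: S
        KeySchema:
          - AttributeName: itemID
            KeyType: HASH
        BillingMode: PAY_PER_REQUEST
```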

For the moment, the only dependency you will need is aws-sdk, which saves you from implementing low-level access to the most important AWS services in JS code. To make sure you can work locally, install the AWS CLI and configure your AWS credentials, so you can give Serverless access to manage the application’s entire lifecycle.

Now it’s time to implement the handler. I’ll show you just one method, but the concept is the same for the other one.

You should note that your API returns HTTP status codes like any other API would, and you can configure the response body however you want. Besides, since Lambda is part of an event-driven architecture, it makes sense that it receives an event when it is invoked; the event body must then be parsed into a JSON object so that it can be processed properly. Another point to consider is that AWS service invocations are asynchronous, so async/await joins must be performed depending on the expected Lambda behavior.

The remaining code is not shown here to keep this post easier to read. You can find a complete implementation in the GitHub repo at the end of this document, but with what I’ve shown, plus the DynamoDB integration, we can deploy our Lambda (but wait for the next section). And we still haven’t touched AWS; we’ve just been coding. Awesome, huh?

…I realized I deserved a second cup of coffee…


We could deploy now, but that’s not ideal. Testing must be mandatory in every development process.

I decided to use Jest to develop my unit tests, and to mock my AWS connection I used aws-sdk-mock.

But first, let’s test our lambda locally, kind of like deploying it on our machine. To do that, we need to configure Serverless to work locally: the serverless-dynamodb-local dependency manages local DynamoDB databases, and the serverless-offline dependency enables Serverless to run locally. Besides, we need to register both dependencies as plugins in our “serverless.yml” by adding a few lines at the end of the file.

You should also tell Serverless in which environment (stage) you want dynamodb-local installed:
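The lines added at the end of “serverless.yml” could look like this (a sketch; the stage name and port are illustrative defaults):

```yaml
plugins:
  - serverless-dynamodb-local
  - serverless-offline

custom:
  dynamodb:
    stages:            # stages where dynamodb-local will be installed
      - dev
    start:
      port: 8000       # the local instance will listen on http://localhost:8000
      inMemory: true
      migrate: true    # create tables from the resources section on start
```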

Then the local Dynamo database will need to be populated:

>serverless dynamodb install
>serverless dynamodb start --migrate

The last command will start a local DynamoDB instance, available at http://localhost:8000/shell, in the dev stage. If you decided to install it in another stage, use the following command (for the test stage, for example):

>serverless dynamodb migrate --stage test

Insert some items following the batchWriteItem syntax (you can find some example queries in the repo at the end of this article):
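A request file following that syntax could look like this (a sketch; the item attributes are illustrative, only the “Market” table and the `itemID` key come from this post):

```json
{
  "Market": [
    {
      "PutRequest": {
        "Item": {
          "itemID": { "S": "1" },
          "name": { "S": "apple" },
          "stock": { "N": "10" }
        }
      }
    }
  ]
}
```

Save it, for example, as items.json and load it into the local instance with:

>aws dynamodb batch-write-item --request-items file://items.json --endpoint-url http://localhost:8000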

You can review all inserted items executing the following command:

>aws dynamodb scan --table-name Market --endpoint-url http://localhost:8000

One thing I see as a disadvantage of the AWS SDK is that you must specify which DynamoDB connection to use directly in the code implementation, but we can design good strategies to handle it properly. To keep this example simple, change your DynamoDB connection manually as follows:

Remember that in this article I don’t publish the getItem function code, you can find it in the GitHub repo at the end of this post.

Now we can start Serverless in offline mode and invoke our lambda’s getItem method locally (mind the JSON format).

Start Serverless in offline mode if you want to execute the request using an external tool like Postman or SoapUI
>serverless offline

Or invoke the lambda directly in your terminal
>serverless invoke local --function seek --data "{\"body\":\"{\\\"itemID\\\": \\\"1\\\"}\"}"

You’ll get the lambda’s response printed in your terminal.

Warning: Do not shut down your local database. If you do, all data will be deleted and you’ll have to populate the database again.

Our lambda is working locally! Yeiiii

Now we are ready to test it.

Note that I mock the DynamoDB connection and set its return values, just like we do with any other mock during the test arrange. At the end, during the test tear down, that mock must be restored. We can also validate input data when the mock is called.

But what about error paths? Well, it’s pretty much the same thing; just set the mock’s callback as follows:

That way, we can simulate an internal error during the execution of the DynamoDB query.

Hint: You can run code coverage by adding this script to your “package.json”:

"scripts": {
    "test:coverage": "jest --coverage --collectCoverageFrom=api/**"
},

Then run this command in a new terminal:
>npm run test:coverage

When you have most of the unit tests in place, you’ll get a coverage report summarizing which lines are covered.

Now we are ready to deploy!

But first… Yes, I had another cup of coffee, this time a cold brew.


So now I’m happy! I learned a lot during the process (I still have a lot to learn) and I’m pretty sure I’m getting closer to the solution I need.

The next step… deploy. But wait! I’m a fashionable woman, and I can say with complete confidence that a manual deploy is out of date. So let’s set up a CI/CD pipeline to run automatic deployments based on the minimum quality requirements our lambda will need.

To do it, I’ll use GitHub Actions and to publish coverage I’ll use Codecov.

As requirements, a GitHub account is necessary of course, and a Codecov one too. Once you have your project versioned, go to “Actions” and configure a NodeJS workflow. Set up the jobs as you wish according to the steps you expect to execute.

To publish code coverage, generate a Codecov API token by activating coverage for your repository in Codecov, and add the following steps:

    - run: npm run test:coverage || echo "Code coverage failed"
    - name: Push code coverage to Codecov
      run: bash <(curl -s https://codecov.io/bash) -t ${{ secrets.CODECOV_TOKEN }}
      env:
        CI: true

To deploy, just add this job to the “nodejs.yml”:

  deploy:
    name: deploy
    runs-on: ubuntu-latest
    strategy:
      matrix:
        node-version:
          - 12.x
    steps:
      - uses: actions/checkout@v2
      - name: 'Use Node.js ${{ matrix.node-version }}'
        uses: actions/setup-node@v1
        with:
          node-version: '${{ matrix.node-version }}'
      - run: npm install
      - run: npm run build --if-present
      - run: npm test
      - name: serverless deploy
        uses: serverless/github-action@v1.53.0
        with:
          args: deploy
        env:
          AWS_ACCESS_KEY_ID: '${{ secrets.AWS_ACCESS_KEY_ID }}'
          AWS_SECRET_ACCESS_KEY: '${{ secrets.AWS_SECRET_ACCESS_KEY }}'

Those environment variables must be set as GitHub secrets.

After a push, the workflow will automatically run and deploy our lambda through the Serverless Framework. You can play with automatic and manual triggers and other Serverless commands, for example to remove a deployed AWS Lambda or roll back a deploy after a quality-gate failure.

…Our automatic deploy is done now!

So, what happens in AWS when a Lambda is deployed through the Serverless Framework?

Well… not only is the lambda deployed, but also the roles required to run operations on the database and the lambda itself (other roles are created as well). A CloudFormation template is created to manage all the resources, and an S3 bucket with the Lambda zip is created too. API Gateway is set up to expose both methods and, if the DynamoDB table doesn’t exist, it will be created as well. All those configurations with only one command. Beautiful!

And that’s it. I encourage you to practice and read a little bit more and, if you have any questions or comments, please let me know.

Finally you can refer to the following repo for more guidance.

Thanks for reading.
