
How I developed, tested and automatically deployed an AWS Lambda for the first time

Updated: Jul 28, 2022

When I started, I was a complete beginner in this topic and didn’t realize how much I needed to know to develop an AWS Lambda. I wanted to do it well, so my main focus was on code, tests, CI/CD, security, availability, reliability, and learning to use something I’d never used before… it's a lot, I know, right?… but what about the infrastructure, and all the possible combinations of settings needed for it to work correctly?… (developer anxiety coming).

…I took my first cup of coffee…


I started searching for tools to speed up my work, and Serverless Framework came up. In my own words, it is a cloud-agnostic framework that gives you a comprehensive, easy-to-use, non-invasive and integrable way to create, deploy and destroy your serverless application. The only things you need are an AWS account, a Serverless Framework account (mine was a free one), the Serverless CLI set up, and a project in your preferred language (I chose NodeJS because of AWS Lambda cold starts; my lambda would be off most of the time). Then execute:

>serverless create --template aws-nodejs --path path/to/your/app
>npm init

And voilà: you have your application's main skeleton. Then go to the “serverless.yml” file (this will be the core of your lambda, the main piece of configuration). I’ll show you something simple but not too basic (to spice this thing up).

Let’s say we have a little market, a few items to sell, and an inventory containing their basic information. We will need something to get an item's information, and maybe update it if we sell the item. Let’s practice!


  1. Give a unique name to your lambda.

  2. I told you that it was cloud agnostic, right? So tell it what your requirements are.

  3. Don’t worry! We will talk about this later. Just for testing purposes.

  4. Define which resources your lambda will access and the actions it will perform on them. In this case, a “Market”. DynamoDB is the only resource that my lambda will access, and it can only read and update items from that table.

  5. Your lambda needs “something” through which it can be accessed. In this case, Serverless Framework will expose an API through API Gateway with 2 methods: The first one will get an item and the second one will update an item. The handlers will be the JS functions containing the implementation for those methods to work properly.

  6. Define the resources that you will create as if you were using CloudFormation.
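Putting the six steps above together, the “serverless.yml” looks roughly like this. This is a sketch, not the exact file from the repo: the service name, handler paths, ARN and table attributes are illustrative.

```yaml
# serverless.yml -- sketch following the six steps above (names and ARNs illustrative)
service: market-lambda                # 1. unique name for the lambda

provider:                             # 2. cloud-agnostic: declare your requirements
  name: aws
  runtime: nodejs12.x
  stage: dev                          # 3. stage, for testing purposes
  iamRoleStatements:                  # 4. resources and allowed actions
    - Effect: Allow
      Action:
        - dynamodb:GetItem
        - dynamodb:UpdateItem
      Resource: "arn:aws:dynamodb:*:*:table/Market"

functions:                            # 5. expose the lambda through API Gateway
  seek:
    handler: api/handler.getItem
    events:
      - http:
          path: item
          method: get
  update:
    handler: api/handler.updateItem
    events:
      - http:
          path: item
          method: put

resources:                            # 6. CloudFormation-style resources
  Resources:
    MarketTable:
      Type: AWS::DynamoDB::Table
      Properties:
        TableName: Market
        AttributeDefinitions:
          - AttributeName: itemID
            AttributeType: S
        KeySchema:
          - AttributeName: itemID
            KeyType: HASH
        BillingMode: PAY_PER_REQUEST
```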

At the moment, the only dependency you will need is aws-sdk, which saves you effort when accessing the most important AWS services from JS code. To make sure you can work locally, you will need to install the AWS CLI and configure your AWS credentials, so that Serverless can manage the application’s entire lifecycle.

Now it's time to implement the Handler class. I’ll show you just one method, but the concept is the same for the other one.
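Here is a minimal sketch of the shape of that method. Note that `queryFn` is purely illustrative, a stand-in for the DynamoDB call so the request/response flow stays visible; the real implementation in the repo calls the aws-sdk directly.

```javascript
// api/handler.js -- a sketch of the getItem handler (names illustrative).
// `queryFn` stands in for the DynamoDB call made through aws-sdk.
const getItem = async (event, queryFn) => {
  // API Gateway delivers the request body as a JSON string: parse it first
  const body = JSON.parse(event.body);
  try {
    // AWS service calls are asynchronous, so await the result before responding
    const item = await queryFn(body.itemID);
    if (!item) {
      return { statusCode: 404, body: JSON.stringify({ message: 'Item not found' }) };
    }
    return { statusCode: 200, body: JSON.stringify(item) };
  } catch (err) {
    // any failure in the query surfaces as an internal server error
    return { statusCode: 500, body: JSON.stringify({ message: err.message }) };
  }
};

module.exports = { getItem };
```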

You should note that your API returns HTTP status codes like any other API would, and you can configure the response body however you want. Besides, since Lambda is part of an event-driven architecture, it makes sense that it receives an event when it is invoked; that event must be parsed into a JSON object so that the body can be processed properly. Another point to consider is that AWS service invocations are asynchronous, so async/await joins must be performed depending on what is expected in the Lambda behavior.

The remaining code is not shown here to keep this post easier to read. You can find a complete implementation in the GitHub repo at the end of this document. With what I’ve shown, plus the DynamoDB integration, we could deploy our Lambda (but wait for the next section), and we still haven’t touched AWS; we’ve just written code. Awesome, huh?

…I realized I deserved a second cup of coffee…


We can deploy now, but that’s not ideal. Testing must be mandatory in every development process.

I decided to use Jest to develop my unit tests, and aws-sdk-mock to mock my AWS connection.

But first, let’s test our lambda locally, kind of like deploying it on our machine. To do that, we need to configure Serverless to work locally: the serverless-dynamodb-local dependency manages local Dynamo databases, and the serverless-offline dependency lets Serverless run locally. We also need to register both dependencies as plugins in our “serverless.yml”. Just add the following lines to the end of the file:
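The plugins section is just the two dependency names (order matters: serverless-dynamodb-local must come before serverless-offline):

```yaml
plugins:
  - serverless-dynamodb-local
  - serverless-offline
```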

You should tell Serverless in which environment you want to install dynamodb-local:
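That goes in the custom section of “serverless.yml”; for example, to install it only in the dev stage:

```yaml
custom:
  dynamodb:
    stages:
      - dev
```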

Then the local Dynamo database will need to be populated:

>serverless dynamodb install
>serverless dynamodb start --migrate

The last command will start a local DynamoDB instance, available at http://localhost:8000/shell, in the dev stage. If you decided to install it in another stage, you should use the following command (for the test stage, for example):

>serverless dynamodb migrate --stage test

Insert some items following the batchWriteItem syntax (you can find some queries in the example repo at the end of this article):
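A request file in that syntax looks something like this (the item attributes here are illustrative, not the exact ones from the repo):

```json
{
  "Market": [
    {
      "PutRequest": {
        "Item": {
          "itemID": { "S": "1" },
          "name": { "S": "Coffee beans" },
          "stock": { "N": "25" }
        }
      }
    },
    {
      "PutRequest": {
        "Item": {
          "itemID": { "S": "2" },
          "name": { "S": "Green tea" },
          "stock": { "N": "40" }
        }
      }
    }
  ]
}
```

Saved as, say, items.json, it can be loaded into the local instance with:
>aws dynamodb batch-write-item --request-items file://items.json --endpoint-url http://localhost:8000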

You can review all inserted items by executing the following command:

>aws dynamodb scan --table-name Market --endpoint-url http://localhost:8000

One thing I see as a disadvantage of the AWS-SDK is that you must specify which Dynamo connection to use directly in the code implementation, though we can design some good strategies to do it properly. To keep this example simple, manually change your DynamoDB connection as follows:
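The idea is to construct the DocumentClient with a local endpoint. A sketch of those options (switching on IS_OFFLINE, which serverless-offline sets, is one possible strategy, not the repo's exact code):

```javascript
// Connection options for the local DynamoDB started by serverless-dynamodb-local
const localOptions = {
  region: 'localhost',
  endpoint: 'http://localhost:8000',
};

// One strategy: serverless-offline sets IS_OFFLINE, so switch on it, e.g.
//   const dynamo = new AWS.DynamoDB.DocumentClient(
//     process.env.IS_OFFLINE ? localOptions : {}
//   );
module.exports = { localOptions };
```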

Remember that I don’t publish the getItem function code in this article; you can find it in the GitHub repo at the end of this post.

Now we can start Serverless in offline mode and invoke our lambda's getItem method locally (be careful with the JSON format).

Start Serverless in offline mode if you want to execute the request using an external tool like Postman or SoapUI
>serverless offline

Or invoke the lambda directly in your terminal
>serverless invoke local --function seek --data "{\"body\":\"{\\\"itemID\\\": \\\"1\\\"}\"}"

You’ll get the following output

Warning: Do not shut down your local database; if you do, all data will be deleted and you’ll have to populate the database again.

Our lambda is working locally! Yeiiii

Now we are ready to test it.

Note that I mock the DynamoDB connection and set the return values just like with any other mock during the test arrangement. At the end, during the test teardown, that mock must be restored. We can also validate input data when the mock is called.
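To make that arrangement concrete, a sketch of such a test might look like this. It assumes Jest and aws-sdk-mock are installed and the handler lives in api/handler.js; the file path and expectations are illustrative, not the repo's exact test.

```javascript
// __tests__/handler.test.js -- sketch of the happy-path test (names illustrative)
const AWSMock = require('aws-sdk-mock');
const handler = require('../api/handler');

describe('getItem', () => {
  // teardown: the mock must be restored after each test
  afterEach(() => AWSMock.restore('DynamoDB.DocumentClient'));

  it('returns 200 with the requested item', async () => {
    // arrangement: mock the DynamoDB connection and set the return value
    AWSMock.mock('DynamoDB.DocumentClient', 'get', (params, callback) => {
      // input data validation can happen inside the mock too
      expect(params.TableName).toBe('Market');
      callback(null, { Item: { itemID: '1', name: 'Coffee beans' } });
    });

    const response = await handler.getItem({ body: JSON.stringify({ itemID: '1' }) });
    expect(response.statusCode).toBe(200);
  });
});
```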

But what about error paths? Well, it’s pretty much the same thing; just set the callback as follows:
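Something along these lines (the error message is illustrative):

```javascript
// arrangement for the error path: make the mocked call fail
AWSMock.mock('DynamoDB.DocumentClient', 'get', (params, callback) => {
  callback(new Error('Internal server error'), null);
});
```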

That way, we can simulate an internal error during the execution of the DynamoDB query.

Hint: You can run code coverage by adding this script to your “package.json”:

"scripts": {
    "test:coverage": "jest --coverage --collectCoverageFrom=api/**"
},

Then in a new terminal run this command:
>npm run test:coverage

When you have most of the unit tests, you’ll have an output like this:

Now we are ready to deploy!

But first… Yes, I had another cup of coffee, this time a cold brew.


So now I’m happy! I learned a lot through this process (I still have a lot to learn) and I’m pretty sure that I’m getting closer to the solution I need.

The next step… deploy. But wait! I’m a fashionable woman, and a manual deploy, I can say with complete confidence, is out of date. So let’s set up a CI/CD pipeline to run automatic deployments based on the minimum quality requirements our lambda will need.

To do that, I’ll use GitHub Actions, and to publish coverage I’ll use Codecov.

As for requirements, a GitHub account is necessary of course, and a Codecov one too. Once you have your project versioned, go to “Actions” and configure a NodeJS workflow. Set up the jobs according to the steps you expect to execute.
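If you haven't seen one before, the generated “nodejs.yml” looks roughly like this (branch names and Node versions are whatever you pick in the template):

```yaml
# .github/workflows/nodejs.yml -- sketch of GitHub's Node.js workflow template
name: Node.js CI
on:
  push:
    branches: [ master ]
  pull_request:
    branches: [ master ]
jobs:
  build:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        node-version: [ 12.x ]
    steps:
      - uses: actions/checkout@v2
      - uses: actions/setup-node@v1
        with:
          node-version: ${{ matrix.node-version }}
      - run: npm install
      - run: npm test
```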

To publish code coverage, generate a Codecov API token by activating coverage for the project repository, and add the following steps:

- name: Run tests with coverage
  run: npm run test:coverage || echo "Code coverage failed"
- name: Push code coverage to Codecov
  run: bash <(curl -s https://codecov.io/bash) -t ${{ secrets.CODECOV_TOKEN }}
  env:
    CI: true

To deploy, just add this job into the “nodejs.yml”:

deploy:
  name: deploy
  runs-on: ubuntu-latest
  strategy:
    matrix:
      node-version:
        - 12.x
  steps:
    - uses: actions/checkout@v2
    - name: 'Use Node.js ${{ matrix.node-version }}'
      uses: actions/setup-node@v1
      with:
        node-version: '${{ matrix.node-version }}'
    - run: npm install
    - run: npm run build --if-present
    - run: npm test
    - name: serverless deploy
      uses: serverless/github-action@v1.53.0
      with:
        args: deploy
      env:
        AWS_ACCESS_KEY_ID: '${{ secrets.AWS_ACCESS_KEY_ID }}'
        AWS_SECRET_ACCESS_KEY: '${{ secrets.AWS_SECRET_ACCESS_KEY }}'

Those environment variables must be set as GitHub secrets.

After a push is done, the workflow will automatically run and deploy our lambda through Serverless Framework. You can play with automatic and manual triggers and other Serverless commands, for example to undeploy an AWS Lambda or rollback a deployment after a failure within the quality gate.
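For reference, the corresponding Serverless CLI commands look like this when run from the project root:

```shell
# Remove the deployed service and every resource it created
serverless remove

# List past deployments, then roll back to one of them by its timestamp
serverless deploy list
serverless rollback --timestamp <timestamp>
```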

…Our automatic deployment is done now!

So, what happens in AWS when a Lambda is deployed through Serverless Framework?

Well… not only is the lambda deployed, but so are the roles required to run operations on the database and the lambda itself (other roles are also created). A CloudFormation template is created to manage all resources, and an S3 bucket holding the Lambda zip as well. API Gateway is set up to expose both methods and, if the DynamoDB table doesn’t exist, it is created too. All those configurations, with only one command. Beautiful!

And that’s it. I encourage you to practice and read a little bit more, and, if you have any questions or comments, please let me know.

Finally, you can refer to the following repo for more guidance.

Thanks for reading.

