ServerlessStack v1.2.2
Introduction
So you might be a backend developer who would like to learn more about the frontend portion
of building serverless apps, or a frontend developer who would like to learn more about the
backend; either way, this guide should have you covered.
We are also catering this solely towards JavaScript developers for now. We might target other
languages and environments in the future. But we think this is a good starting point because it
can be really beneficial as a full-stack developer to use a single language (JavaScript) and
environment (Node.js) to build your entire application.
On a personal note, the serverless approach has been a giant revelation for us and we wanted
to create a resource where we could share what we’ve learnt. You can read more about us here
(/about.html).
We’ll be using the AWS Platform to build it. We might expand further and cover a few other
platforms but we figured the AWS Platform would be a good place to start. We’ll be using the
following set of technologies to build our serverless application.

- Lambda & API Gateway for our serverless API
- DynamoDB for our database
- Cognito for user authentication and securing our APIs
- S3 for file uploads and hosting our app
- CloudFront for serving out our app
- React.js for our single page app
- React Router for routing
- Bootstrap for the UI Kit
While the list above might look daunting, we are trying to ensure that upon completing the
guide you’ll be ready to build real-world, secure, and fully-functional web apps. And don’t
worry, we’ll be around to help!
We think this will give you a good foundation on building full-stack serverless applications. If
there are any other concepts or technologies you’d like us to cover, feel free to let us know via
email (mailto:[email protected]).
Traditionally, we’ve built and deployed web applications where our application runs on a
server that we are responsible for provisioning and managing. There are a few issues with this.

1. We are charged for keeping the server up even when we are not serving out any requests.
2. We are responsible for uptime and maintenance of the server and all its resources.
3. We are also responsible for applying the appropriate security updates to the server.
4. As our usage scales we need to manage scaling our server up, and scaling it back down
when we don’t have as much usage.
For smaller companies and individual developers this can be a lot to handle. It ends up
distracting from the more important job that we have: building and maintaining the actual
application. At larger organizations this is handled by the infrastructure team and usually it is
not the responsibility of the individual developer. However, the processes necessary to support
this can end up slowing down development times, since you cannot just go ahead and build your
application without working with the infrastructure team to help you get up and running.
As developers we’ve been looking for a solution to these problems and this is where serverless
comes in. Serverless allows us to build applications where we simply hand the cloud provider
(AWS, Azure, or Google Cloud) our code and it runs it for us. It also allocates the appropriate
amount of resources to respond to the usage. On our end we only get charged for the time it
took our code to execute and the resources it consumed. If we are undergoing a spike of usage,
the cloud provider simply creates more instances of our code to respond to the requests.
Additionally, our code runs in a secured environment where the cloud provider takes care of
keeping the server up to date and secure.
AWS Lambda
In serverless applications we are not responsible for handling the requests that come in to our
server. Instead the cloud provider handles the requests and sends us an object that contains the
relevant info and asks us how we want to respond to it. The request is treated as an event and
our code is simply a function that takes this as the input. As a result we are writing functions
that are meant to respond to these events. So when a user makes a request, the cloud provider
creates a container and runs our function inside it. If there are two concurrent requests, then
two separate containers are created to respond to the requests.
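Here is roughly what a Lambda function in Node.js looks like.

exports.myHandler = function(event, context, callback) {
  // Do our work here using the incoming 'event'

  // Respond by calling the callback with a result (or an error)
  callback(null, "Hello World!");
};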
Here myHandler is the name of our Lambda function. The event object contains all the
information about the event that triggered this Lambda. In our case it’ll be information about
the HTTP request. The context object contains info about the runtime our Lambda function
is executing in. After we do all the work inside our Lambda function, we simply call the
callback function with the results (or the error) and AWS will respond to the HTTP request
with it.
While this example is in JavaScript (or Node.js), AWS Lambda supports Python, Java, and C# as
well. Lambda functions are billed in 100ms increments of execution time and, as mentioned
above, they automatically scale to respond to the usage. The Lambda runtime also comes with
512MB of ephemeral disk space and up to 1536MB of memory.
Next, let’s take a deeper look into the advantages of serverless including the cost of running our
demo app.
1. Low maintenance
2. Low cost
3. Easy to scale
The biggest benefit by far is that you only need to worry about your code and nothing else. And
the low maintenance is a result of not having any servers to manage. You don’t need to actively
ensure that your server is running properly or that you have the right security updates on it.
You deal with your own application code and nothing else.
The main reason it’s cheaper to run serverless applications is that you are effectively only
paying per request. So when your application is not being used, you are not being charged for it.
Let’s do a quick breakdown of what it would cost for us to run our note taking application. We’ll
assume that we have 1000 daily active users making 20 requests per day to our API and storing
around 10MB of files on S3. Here is a very rough calculation of our costs.
Total: $6.10 per month

[1] Cognito is free for < 50K MAUs and $0.00550/MAU onwards.
[2] Lambda is free for < 1M requests and 400,000 GB-seconds of compute.
[3] DynamoDB gives 25GB of free storage.
[4] S3 gives 1GB of free transfer.
So that comes out to $6.10 per month. Additionally, a .com domain would cost us $12 per year,
making that the biggest up front cost for us. But just keep in mind that these are very rough
estimates. Real-world usage patterns are going to be very different. However, these rates
should give you a sense of how the cost of running a serverless application is calculated.
Finally, the ease of scaling is thanks in part to DynamoDB which gives us near infinite scale and
Lambda that simply scales up to meet the demand. And of course our frontend is a simple static
single page app that is almost guaranteed to always respond instantly thanks to CloudFront.
Great! Now that you are convinced why you should build serverless apps, let’s get started.
Next let’s configure your account so it’s ready to be used for the rest of our guide.
In this chapter, we are going to create a new IAM user for a couple of the AWS related tools we
are going to be using later.
Create User
First, log in to your AWS Console (https://console.aws.amazon.com) and select IAM from the
list of services.
Select Users.
This account will be used by our AWS CLI (https://aws.amazon.com/cli/) and Serverless
Framework (https://serverless.com). They’ll be connecting to the AWS API directly and will not
be using the Management Console.
Select Attach existing policies directly.
Search for AdministratorAccess and select the policy, then select Next: Review.
We can provide a more fine-grained policy here and we cover this later in the Customize the
Serverless IAM Policy (/chapters/customize-the-serverless-iam-policy.html) chapter. But for
now, let’s continue with this.
The concept of IAM pops up very frequently when working with AWS services. So it is worth
taking a better look at what IAM is and how it can help us secure our serverless setup.
AWS Identity and Access Management (IAM) is a web service that helps you securely control
access to AWS resources for your users. You use IAM to control who can use your AWS
resources (authentication) and what resources they can use and in what ways (authorization).
The first thing to notice here is that IAM is a service just like all the other services that AWS has.
But in some ways it helps bring them all together in a secure way. IAM is made up of a few
different parts, so let’s start by looking at the first and most basic one.
What is an IAM User

An IAM user consists of a name, a password to sign into the AWS Management Console, and up
to two access keys that can be used with the API or CLI.
What is an IAM Policy

By default, users can’t access anything in your account. You grant permissions to a user by
creating a policy and attaching it to the user. You can grant one or more of these policies to
restrict what the user can and cannot access. For example, here is a policy that grants the user
full access to S3.
{
  "Version": "2012-10-17",
  "Statement": {
    "Effect": "Allow",
    "Action": "s3:*",
    "Resource": "*"
  }
}
And here is a policy that grants more granular access, only allowing retrieval of files prefixed by
the string Bobs- in the bucket called Hello-bucket.
{
  "Version": "2012-10-17",
  "Statement": {
    "Effect": "Allow",
    "Action": ["s3:GetObject"],
    "Resource": "arn:aws:s3:::Hello-bucket/*",
    "Condition": {"StringEquals": {"s3:prefix": "Bobs-"}}
  }
}
We are using S3 resources in the above examples, but a policy looks similar for any of the AWS
services; it just depends on the resource ARN for the Resource property. An ARN is an identifier
for a resource in AWS and we’ll look at it in more detail in the next chapter. We also add the
corresponding service actions and condition context keys in the Action and Condition
properties. You can find all the available AWS Service actions and condition context keys for use
in IAM Policies here (http://docs.aws.amazon.com/IAM/latest/UserGuide/list_s3.html). Aside
from attaching a policy to a user, you can attach them to a role or a group.
What is an IAM Role

An IAM role is very similar to a user, in that it is an identity with permission policies that
determine what the identity can and cannot do in AWS. However, a role does not have any
credentials (password or access keys) associated with it. Instead of being uniquely associated
with one person, a role can be taken on by anyone who needs it. In our case, the Lambda
function will be assigned a role so it can temporarily take on the permissions it needs.
Roles can be applied to users as well. In this case, the user is taking on the policy set for the IAM
role. This is useful for cases where a user is wearing multiple “hats” in the organization. Roles
make this easy since you only need to create these roles once and they can be re-used for
anybody else that wants to take it on.
You can also have a role tied to the ARN of a user from a different organization. This allows the
external user to assume that role as a part of your organization. This is typically used when you
have a third party service that is acting on your AWS Organization. You’ll be asked to create a
Cross-Account IAM Role and add the external user as a Trust Relationship. The Trust
Relationship is telling AWS that the specified external user can assume this role.
What is an IAM Group
An IAM group is simply a collection of IAM users. You can use groups to specify permissions for
a collection of users, which can make those permissions easier to manage for those users. For
example, you could have a group called Admins and give that group the types of permissions
that administrators typically need. Any user in that group automatically has the permissions
that are assigned to the group. If a new user joins your organization and should have
administrator privileges, you can assign the appropriate permissions by adding the user to that
group. Similarly, if a person changes jobs in your organization, instead of editing that user’s
permissions, you can remove him or her from the old groups and add him or her to the
appropriate new groups.
This should give you a quick idea of IAM and some of its concepts. We will be referring to a few
of these in the coming chapters. Next let’s quickly look at another AWS concept: the ARN.
Amazon Resource Names (ARNs) uniquely identify AWS resources. You require an ARN
when you need to specify a resource unambiguously across all of AWS, such as in IAM policies,
Amazon Relational Database Service (Amazon RDS) tags, and API calls.
ARN is really just a globally unique identifier for an individual AWS resource. It takes one of the
following formats.
arn:partition:service:region:account-id:resource
arn:partition:service:region:account-id:resourcetype/resource
arn:partition:service:region:account-id:resourcetype:resource
Let’s look at some examples of ARN. Note the different formats used.
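Here are a couple of illustrative examples (the account id 123456789012 is a placeholder).

arn:aws:lambda:us-east-1:123456789012:function:my-function
arn:aws:s3:::my_bucket/picture.jpg

Note that the S3 ARN omits the region and account id, while the Lambda ARN qualifies the
resource with a resourcetype (function).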
1. Communication

ARN is used to reference a specific resource when you orchestrate a system involving
multiple AWS resources. For example, you have an API Gateway listening for RESTful APIs
and invoking the corresponding Lambda function based on the API path and request
method. The routing looks like the following.
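A hypothetical routing table, using the Lambda function names we deploy later in this guide
and a placeholder account id, might look like this.

GET  /notes      => arn:aws:lambda:us-east-1:123456789012:function:notes-app-api-prod-list
POST /notes      => arn:aws:lambda:us-east-1:123456789012:function:notes-app-api-prod-create
GET  /notes/{id} => arn:aws:lambda:us-east-1:123456789012:function:notes-app-api-prod-get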
2. IAM Policy
We had looked at this in detail in the last chapter but here is an example of a policy
definition.
{
  "Version": "2012-10-17",
  "Statement": {
    "Effect": "Allow",
    "Action": ["s3:GetObject"],
    "Resource": "arn:aws:s3:::Hello-bucket/*"
  }
}
ARN is used to define which resource (S3 bucket in this case) the access is granted for. The
wildcard * character is used here to match all resources inside the Hello-bucket.
Next let’s configure our AWS CLI. We’ll be using the info from the IAM user account we created
previously.
Now using Pip you can install the AWS CLI (on Linux, macOS, or Unix) by running:
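$ sudo pip install awscli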
If you are having some problems installing the AWS CLI or need Windows install instructions,
refer to the complete install instructions
(http://docs.aws.amazon.com/cli/latest/userguide/installing.html).
Simply run the following with your Access Key ID and your Secret Access Key.
$ aws configure
You can leave the Default region name and Default output format the way they are.
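The prompts look like this; the keys shown here are AWS’s documentation examples, not real
credentials.

AWS Access Key ID [None]: AKIAIOSFODNN7EXAMPLE
AWS Secret Access Key [None]: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
Default region name [None]:
Default output format [None]: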
Next let’s get started with setting up our backend.
About DynamoDB
Amazon DynamoDB is a fully managed NoSQL database that provides fast and predictable
performance with seamless scalability. Similar to other databases, DynamoDB stores data in
tables. Each table contains multiple items, and each item is composed of one or more attributes.
Create Table
First, log in to your AWS Console (https://console.aws.amazon.com) and select DynamoDB
from the list of services.
Select Create table.
Enter the Table name and Primary key info as shown below. Just make sure that userId and
noteId are in camel case.
Each DynamoDB table has a primary key, which cannot be changed once set. The primary key
uniquely identifies each item in the table, so that no two items can have the same key.
DynamoDB supports two different kinds of primary keys:
Partition key
Partition key and sort key (composite)
We are going to use the composite primary key which gives us additional flexibility when
querying the data. For example, if you provide only the value for userId , DynamoDB would
retrieve all of the notes by that user. Or you could provide a value for userId and a value for
noteId , to retrieve a particular note.
To get a better understanding of how indexes work in DynamoDB, you can read more in
DynamoDB Core Components
(http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/HowItWorks.CoreComponents.html).
If you see the following message, deselect Use default settings.
Scroll to the bottom, ensure that New role: DynamoDBAutoscaleRole is selected, and select
Create.
Otherwise, simply ensure that Use default settings is checked, then select Create.
Note that the default setting provisions 5 reads and 5 writes. When you create a table, you
specify how much provisioned throughput capacity you want to reserve for reads and writes.
DynamoDB will reserve the necessary resources to meet your throughput needs while
ensuring consistent, low-latency performance. One read capacity unit covers one strongly
consistent read per second (or two eventually consistent reads) for items up to 4 KB, and one
write capacity unit covers one write per second for items up to 1 KB. You can change your
provisioned throughput settings, increasing or decreasing capacity as needed.
The notes table has now been created. If you find yourself stuck with the Table is being
created message, refresh the page manually.
Next we’ll set up an S3 bucket to handle file uploads.
In this chapter, we are going to create an S3 bucket which will be used to store user uploaded
files from our notes app.
Create Bucket
First, log in to your AWS Console (https://console.aws.amazon.com) and select S3 from the list
of services.
Select Create Bucket.
Pick a name for the bucket and select a region. Then select Create.

Bucket names are globally unique, which means you cannot pick the same name used in this
tutorial.

Region is the physical geographical region where the files are stored. We will use US East
(N. Virginia) for this guide.
Step through the next steps and leave the defaults by clicking Next, and then click Create
Bucket on the last step.
Enable CORS
In the notes app we’ll be building, users will be uploading files to the bucket we just created.
And since our app will be served through our custom domain, it’ll be communicating across
domains while it does the uploads. By default, S3 does not allow its resources to be accessed
from a different domain. However, cross-origin resource sharing (CORS) defines a way for
client web applications that are loaded in one domain to interact with resources in a different
domain. Let’s enable CORS for our S3 bucket. Select the bucket we just created, head over to
the Permissions tab, select CORS configuration, and paste in the following.
<CORSConfiguration>
<CORSRule>
<AllowedOrigin>*</AllowedOrigin>
<AllowedMethod>GET</AllowedMethod>
<AllowedMethod>PUT</AllowedMethod>
<AllowedMethod>POST</AllowedMethod>
<AllowedMethod>HEAD</AllowedMethod>
<MaxAgeSeconds>3000</MaxAgeSeconds>
<AllowedHeader>*</AllowedHeader>
</CORSRule>
</CORSConfiguration>
Now that our S3 bucket is ready, let’s get set up to handle user authentication.
Amazon Cognito User Pool makes it easy for developers to add sign-up and sign-in functionality
to web and mobile applications. It serves as your own identity provider to maintain a user
directory. It supports user registration and sign-in, as well as provisioning identity tokens for
signed-in users.
In this chapter, we are going to create a User Pool for our notes app.
Select Email address or phone numbers and Allow email addresses. This tells the Cognito
User Pool that we want our users to be able to sign up and log in with their email as their
username.
Scroll down and select Next step.
Hit Review in the side panel and make sure that Username attributes is set to email.
Generate client secret: user pool apps with a client secret are not supported by the JavaScript
SDK, so we need to un-select this option.

Enable sign-in API for server-based authentication: required by the AWS CLI when managing
the pool users from the command line. We will be creating a test user from the command line
in the next chapter.

Now that the app client is created, take note of the App client id; it will be required in later
chapters.
Now our Cognito User Pool is ready. It will maintain a user directory for our notes app. It will
also be used to authenticate access to our API. Next let’s set up a test user within the pool.
Create User
First, we will use AWS CLI to sign up a user with their email and password.
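The command looks like the following; the email and password here are placeholders, and the
region and app client id are the ones from the User Pool we just created.

$ aws cognito-idp sign-up \
  --region YOUR_COGNITO_REGION \
  --client-id YOUR_COGNITO_APP_CLIENT_ID \
  --username [email protected] \
  --password Passw0rd!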
Now, the user is created in Cognito User Pool. However, before the user can authenticate with
the User Pool, the account needs to be verified. Let’s quickly verify the user using an
administrator command.
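A command along these lines confirms the user as an administrator.

$ aws cognito-idp admin-confirm-sign-up \
  --region YOUR_COGNITO_REGION \
  --user-pool-id YOUR_USER_POOL_ID \
  --username [email protected]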
Now our test user is ready. Next, let’s set up the Serverless Framework to create our backend
APIs.
In this chapter, we are going to set up the Serverless Framework on our local development
environment.
Install Serverless
Create a directory for our API backend.
$ mkdir notes-app-api
$ cd notes-app-api
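Now install the Serverless Framework globally.

$ npm install serverless -g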
The above command needs NPM (https://www.npmjs.com), a package manager for JavaScript.
Follow this (https://docs.npmjs.com/getting-started/installing-node) if you need help installing
NPM.
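Then create a Node.js starter project inside this directory.

$ serverless create --template aws-nodejs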
Now the directory should contain 2 files, namely handler.js and serverless.yml.
$ ls
handler.js serverless.yml
The handler.js file contains the actual code for the services/functions that will be deployed to
AWS Lambda.

The serverless.yml file contains the configuration for the AWS services that Serverless will
provision and how to configure them.
$ npm init -y
This creates a new Node.js project for you. This will help us manage any dependencies our
project might have.
Now the directory should contain three files and one directory.
$ ls
handler.js node_modules package.json serverless.yml
Next, we are going to set up a standard JavaScript environment for us by adding support for
ES6.
In this chapter, we are going to enable ES6/ES7 for AWS Lambda using the Serverless
Framework. We will do this by setting up Babel (https://babeljs.io) and Webpack
(https://webpack.github.io) to transpile and package our project. If you would like to code with
AWS Lambda’s default JavaScript version, you can skip this chapter. But you will not be able to
directly use the sample code in the later chapters, as they are written in ES6 syntax.
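First, install Babel, Webpack, and the related plugins as development dependencies; this
package list is inferred from the .babelrc and Webpack config that follow.

$ npm install --save-dev babel-core babel-loader babel-plugin-transform-runtime babel-preset-es2015 babel-preset-stage-3 serverless-webpack webpack webpack-node-externals
$ npm install --save babel-runtime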
Most of the above packages are only needed while we are building our project and they won’t
be deployed to our Lambda functions. We are using the serverless-webpack plugin to help
trigger the Webpack build when we run our Serverless commands. The webpack-node-externals
package is necessary because we do not want Webpack to bundle our aws-sdk module, since it
is not compatible with Webpack.
Create a file called webpack.config.js in the root with the following.

const slsw = require("serverless-webpack");
const nodeExternals = require("webpack-node-externals");

module.exports = {
entry: slsw.lib.entries,
target: "node",
// Since 'aws-sdk' is not compatible with webpack,
// we exclude all node dependencies
externals: [nodeExternals()],
// Run babel on all .js files and skip those in node_modules
module: {
rules: [
{
test: /\.js$/,
loader: "babel-loader",
include: __dirname,
exclude: /node_modules/
}
]
}
};
This is the configuration Webpack will use to package our app. The main part of this config is
the entry attribute that we are automatically generating using the slsw.lib.entries
that is a part of the serverless-webpack plugin. This automatically picks up all our handler
functions and packages them (we expand on this config at the end of our guide
(/chapters/serverless-nodejs-starter.html) to make it a bit easier to use).
Next create a file called .babelrc in the root with the following.
{
"plugins": ["transform-runtime"],
"presets": ["es2015", "stage-3"]
}
The presets are telling Babel the type of JavaScript we are going to be using.
Open the serverless.yml file and replace it with the following.

service: notes-app-api

# Load the serverless-webpack plugin so our Serverless
# commands trigger the Webpack build
plugins:
  - serverless-webpack

provider:
  name: aws
  runtime: nodejs6.10
  stage: prod
  region: us-east-1
Create a new file called create.js in our project root. A complete version of the handler looks
something like this.

import uuid from "uuid";
import AWS from "aws-sdk";

AWS.config.update({ region: "us-east-1" });
const dynamoDb = new AWS.DynamoDB.DocumentClient();

export function main(event, context, callback) {
  // Request body is passed in as a JSON encoded string in 'event.body'
  const data = JSON.parse(event.body);

  const params = {
    TableName: "notes",
    // 'Item' contains the attributes of the item to be created
    // - 'userId': user identities are federated through the
    //   Cognito Identity Pool, we will use the identity id
    //   as the user id of the authenticated user
    // - 'noteId': a unique uuid
    // - 'content': parsed from request body
    // - 'attachment': parsed from request body
    // - 'createdAt': current Unix timestamp
    Item: {
      userId: event.requestContext.identity.cognitoIdentityId,
      noteId: uuid.v1(),
      content: data.content,
      attachment: data.attachment,
      createdAt: new Date().getTime()
    }
  };

  dynamoDb.put(params, (error, result) => {
    // Set response headers to enable CORS (Cross-Origin Resource Sharing)
    const headers = {
      "Access-Control-Allow-Origin": "*",
      "Access-Control-Allow-Credentials": true
    };

    // Return status code 500 on error, or 200 with the new item on success
    const response = error
      ? { statusCode: 500, headers, body: JSON.stringify({ status: false }) }
      : { statusCode: 200, headers, body: JSON.stringify(params.Item) };

    callback(null, response);
  });
}
We are setting the AWS JS SDK to use the region us-east-1 while connecting to
DynamoDB.
Parse the input from the event.body . This represents the HTTP request parameters.
The userId is a Federated Identity id that comes in as a part of the request. This is set
after our user has been authenticated via the User Pool. We are going to expand more on
this in the coming chapters when we set up our Cognito Identity Pool.
Make a call to DynamoDB to put a new object with a generated noteId and the current
date as the createdAt .
Upon success, return the newly created note object with the HTTP status code 200 and
response headers to enable CORS (Cross-Origin Resource Sharing).
And if the DynamoDB call fails, then return an error with the HTTP status code 500.
Open the serverless.yml file and replace it with the following.

service: notes-app-api

plugins:
  - serverless-webpack

provider:
  name: aws
  runtime: nodejs6.10
  stage: prod
  region: us-east-1

functions:
  # Defines an HTTP API endpoint that calls the main function in create.js
  # - path: url path is /notes
  # - method: POST request
  # - cors: enabled CORS (Cross-Origin Resource Sharing) for browser cross
  #   domain api call
  # - authorizer: authenticate using the AWS IAM role
  create:
    handler: create.main
    events:
      - http:
          path: notes
          method: post
          cors: true
          authorizer: aws_iam
Here we are adding our newly created create function to the configuration. We specify that it
handles post requests at the /notes endpoint. We set CORS support to true because our
frontend is going to be served from a different domain. As the authorizer we are going to
restrict access to our API based on the user’s IAM credentials. We will touch on this, and how
our User Pool works with it, in the Cognito Identity Pool chapter.
Test
Now we are ready to test our new API. To be able to test it on our local we are going to mock
the input parameters.
$ mkdir mocks
$ cd mocks

Create a mocks/create-event.json file and add the following.
{
  "body": "{\"content\":\"hello world\",\"attachment\":\"hello.jpg\"}",
  "requestContext": {
    "identity": {
      "cognitoIdentityId": "USER-SUB-1234"
    }
  }
}
You might have noticed that the body and requestContext fields are the ones we used in
our create function. In this case the cognitoIdentityId field is just a string we are going to
use as our userId . We can use any string here; just make sure to use the same one when we
test our other functions.
And to invoke our function we run the following in the root directory.
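$ serverless invoke local --function create --path mocks/create-event.json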
If you have multiple profiles for your AWS SDK credentials, you will need to explicitly pick one.
Use the following command instead:
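$ AWS_PROFILE=myProfile serverless invoke local --function create --path mocks/create-event.json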
Where myProfile is the name of the AWS profile you want to use. If you need more info on
how to work with AWS profiles in Serverless, refer to our Configure multiple AWS profiles
(/chapters/configure-multiple-aws-profiles.html) chapter.
{
  statusCode: 200,
  headers: {
    'Access-Control-Allow-Origin': '*',
    'Access-Control-Allow-Credentials': true
  },
  body: '{"userId":"USER-SUB-1234","noteId":"578eb840-f70f-11e6-9d1a-1359b3b22944","content":"hello world","attachment":"hello.jpg","createdAt":1487800950620}'
}
Make a note of the noteId in the response. We are going to use this newly created note in the
next chapter.
Let’s refactor our code to make it easier to add the rest of our APIs. Create a libs/ directory in
the project root for our helper functions.

$ mkdir libs
$ cd libs
This will manage building the response objects for both success and failure cases with the
proper HTTP status code and headers.
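Create a file for it in this directory; a sketch, assuming we call it response-lib.js.

export function success(body) {
  return buildResponse(200, body);
}

export function failure(body) {
  return buildResponse(500, body);
}

function buildResponse(statusCode, body) {
  return {
    statusCode: statusCode,
    headers: {
      "Access-Control-Allow-Origin": "*",
      "Access-Control-Allow-Credentials": true
    },
    body: JSON.stringify(body)
  };
}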
Next, create a libs/dynamodb-lib.js file with the following.

import AWS from "aws-sdk";

AWS.config.update({ region: "us-east-1" });

export function call(action, params) {
  const dynamoDb = new AWS.DynamoDB.DocumentClient();
  return dynamoDb[action](params).promise();
}
Here we are using the promise form of the DynamoDB methods. Promises are a method for
managing asynchronous code that serve as an alternative to the standard callback function
syntax. It will make our code a lot easier to read.
Now, we’ll go back to our create.js and use the helper functions we created. Our
create.js should now look like the following.

import uuid from "uuid";
import * as dynamoDbLib from "./libs/dynamodb-lib";
import { success, failure } from "./libs/response-lib";

export async function main(event, context, callback) {
  // Request body is passed in as a JSON encoded string in 'event.body'
  const data = JSON.parse(event.body);

  const params = {
    TableName: "notes",
    Item: {
      userId: event.requestContext.identity.cognitoIdentityId,
      noteId: uuid.v1(),
      content: data.content,
      attachment: data.attachment,
      createdAt: new Date().getTime()
    }
  };

  try {
    await dynamoDbLib.call("put", params);
    callback(null, success(params.Item));
  } catch (e) {
    callback(null, failure({ status: false }));
  }
}
Next, we are going to write the API to get a note given its id.
Common Issues
If you see a statusCode: 500 response when you invoke your function, here is how to
debug it. The error is generated by our code in the catch block. Adding a console.log
like so, should give you a clue about what the issue is.
catch(e) {
console.log(e);
callback(null, failure({status: false}));
}
Create a new file called get.js in the project root; the handler looks something like this.

import * as dynamoDbLib from "./libs/dynamodb-lib";
import { success, failure } from "./libs/response-lib";

export async function main(event, context, callback) {
  const params = {
    TableName: "notes",
    // 'Key' defines the partition key and sort key of the item to be retrieved
    // - 'userId': Identity Pool identity id of the authenticated user
    // - 'noteId': path parameter from the request URL
    Key: {
      userId: event.requestContext.identity.cognitoIdentityId,
      noteId: event.pathParameters.id
    }
  };

  try {
    const result = await dynamoDbLib.call("get", params);
    if (result.Item) {
      // Return the retrieved item
      callback(null, success(result.Item));
    } else {
      callback(null, failure({ status: false, error: "Item not found." }));
    }
  } catch (e) {
    callback(null, failure({ status: false }));
  }
}
This follows exactly the same structure as our previous create.js function. The major
difference here is that we are doing a dynamoDbLib.call('get', params) to get a note
object given the noteId and userId that is passed in through the request.
Open the serverless.yml file and append the following to it.

  get:
    # Defines an HTTP API endpoint that calls the main function in get.js
    # - path: url path is /notes/{id}
    # - method: GET request
    handler: get.main
    events:
      - http:
          path: notes/{id}
          method: get
          cors: true
          authorizer: aws_iam
This defines our get note API. It adds a GET request handler with the endpoint /notes/{id} .
Test
To test our get note API we need to mock passing in the noteId parameter. We are going to
use the noteId of the note we created in the previous chapter and add in a
pathParameters block to our mock. So it should look similar to the one below. Replace the
value of id with the id you received when you invoked the previous create.js function.
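Create a mocks/get-event.json file and add the following.

{
  "pathParameters": {
    "id": "578eb840-f70f-11e6-9d1a-1359b3b22944"
  },
  "requestContext": {
    "identity": {
      "cognitoIdentityId": "USER-SUB-1234"
    }
  }
}

And invoke our function from the root directory of the project.

$ serverless invoke local --function get --path mocks/get-event.json

The response should look similar to this.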
{
  statusCode: 200,
  headers: {
    'Access-Control-Allow-Origin': '*',
    'Access-Control-Allow-Credentials': true
  },
  body: '{"attachment":"hello.jpg","content":"hello world","createdAt":1487800950620,"noteId":"578eb840-f70f-11e6-9d1a-1359b3b22944","userId":"USER-SUB-1234"}'
}
Next, let’s create an API to list all the notes a user has.
Create a new file called list.js in the project root; the handler looks something like this.

import * as dynamoDbLib from "./libs/dynamodb-lib";
import { success, failure } from "./libs/response-lib";

export async function main(event, context, callback) {
  const params = {
    TableName: "notes",
    // 'KeyConditionExpression' defines the condition for the query
    // - 'userId = :userId': only return items with a matching 'userId' partition key
    // 'ExpressionAttributeValues' defines the value in the condition
    // - ':userId': the Identity Pool identity id of the authenticated user
    KeyConditionExpression: "userId = :userId",
    ExpressionAttributeValues: {
      ":userId": event.requestContext.identity.cognitoIdentityId
    }
  };

  try {
    const result = await dynamoDbLib.call("query", params);
    // Return the matching list of items in response body
    callback(null, success(result.Items));
  } catch (e) {
    callback(null, failure({ status: false }));
  }
}
This is pretty much the same as our get.js except we only pass in the userId in the
DynamoDB query call.
Open the serverless.yml file and append the following to it.

  list:
    # Defines an HTTP API endpoint that calls the main function in list.js
    # - path: url path is /notes
    # - method: GET request
    handler: list.main
    events:
      - http:
          path: notes
          method: get
          cors: true
          authorizer: aws_iam
Test
Create a mocks/list-event.json file and add the following.
{
"requestContext": {
"identity": {
"cognitoIdentityId": "USER-SUB-1234"
}
}
}
And invoke our function from the root directory of the project.
$ serverless invoke local --function list --path mocks/list-event.json
{
  statusCode: 200,
  headers: {
    'Access-Control-Allow-Origin': '*',
    'Access-Control-Allow-Credentials': true
  },
  body: '[{"attachment":"hello.jpg","content":"hello world","createdAt":1487800950620,"noteId":"578eb840-f70f-11e6-9d1a-1359b3b22944","userId":"USER-SUB-1234"}]'
}
Note that this API returns an array of note objects as opposed to the get.js function that
returns just a single note object.
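Create a new file called update.js. A minimal version, following the same structure as our
other handlers, might look like this.

import * as dynamoDbLib from "./libs/dynamodb-lib";
import { success, failure } from "./libs/response-lib";

export async function main(event, context, callback) {
  const data = JSON.parse(event.body);

  const params = {
    TableName: "notes",
    // 'Key' defines the partition key and sort key of the item to be updated
    Key: {
      userId: event.requestContext.identity.cognitoIdentityId,
      noteId: event.pathParameters.id
    },
    // 'UpdateExpression' defines the attributes to be updated
    // 'ExpressionAttributeValues' defines the values in the update expression
    UpdateExpression: "SET content = :content, attachment = :attachment",
    ExpressionAttributeValues: {
      ":attachment": data.attachment,
      ":content": data.content
    },
    ReturnValues: "ALL_NEW"
  };

  try {
    const result = await dynamoDbLib.call("update", params);
    callback(null, success({ status: true }));
  } catch (e) {
    callback(null, failure({ status: false }));
  }
}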
This should look similar to the create.js function. Here we make an update DynamoDB
call with the new content and attachment values in the params .
Open the serverless.yml file and append the following to it.

  update:
    # Defines an HTTP API endpoint that calls the main function in update.js
    # - path: url path is /notes/{id}
    # - method: PUT request
    handler: update.main
    events:
      - http:
          path: notes/{id}
          method: put
          cors: true
          authorizer: aws_iam
Here we are adding a handler for the PUT request to the /notes/{id} endpoint.
Test
Create a mocks/update-event.json file and add the following.
Also, don’t forget to use the noteId of the note we have been using in place of the id in the
pathParameters block.
{
"body": "{\"content\":\"new world\",\"attachment\":\"new.jpg\"}",
"pathParameters": {
"id": "578eb840-f70f-11e6-9d1a-1359b3b22944"
},
"requestContext": {
"identity": {
"cognitoIdentityId": "USER-SUB-1234"
}
}
}
And we invoke our newly created function from the root directory.
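$ serverless invoke local --function update --path mocks/update-event.json

The response should look similar to this.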
{
statusCode: 200,
headers: {
'Access-Control-Allow-Origin': '*',
'Access-Control-Allow-Credentials': true
},
body: '{"status":true}'
}
Next we are going to add an API to delete a note given its id.
Create a new file called delete.js; the handler looks something like this.

import * as dynamoDbLib from "./libs/dynamodb-lib";
import { success, failure } from "./libs/response-lib";

export async function main(event, context, callback) {
  const params = {
    TableName: "notes",
    // 'Key' defines the partition key and sort key of the item to be deleted
    Key: {
      userId: event.requestContext.identity.cognitoIdentityId,
      noteId: event.pathParameters.id
    }
  };

  try {
    const result = await dynamoDbLib.call("delete", params);
    callback(null, success({ status: true }));
  } catch (e) {
    callback(null, failure({ status: false }));
  }
}
This makes a DynamoDB delete call with the userId & noteId key to delete the note.
Configure the API Endpoint

Open the serverless.yml file and append the following to it.

  delete:
    # Defines an HTTP API endpoint that calls the main function in delete.js
    # - path: url path is /notes/{id}
    # - method: DELETE request
    handler: delete.main
    events:
      - http:
          path: notes/{id}
          method: delete
          cors: true
          authorizer: aws_iam
Test
Create a mocks/delete-event.json file and add the following.
Just like before we’ll use the noteId of our note in place of the id in the pathParameters
block.
{
"pathParameters": {
"id": "578eb840-f70f-11e6-9d1a-1359b3b22944"
},
"requestContext": {
"identity": {
"cognitoIdentityId": "USER-SUB-1234"
}
}
}
Invoke our newly created function from the root directory.
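$ serverless invoke local --function delete --path mocks/delete-event.json

The response should look similar to this.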
{
  statusCode: 200,
  headers: {
    'Access-Control-Allow-Origin': '*',
    'Access-Control-Allow-Credentials': true
  },
  body: '{"status":true}'
}
Now that our APIs are complete, we’ll deploy them next.

Run the following in your project root.

$ serverless deploy
If you have multiple profiles for your AWS SDK credentials, you will need to explicitly pick one.
Use the following command instead:
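$ AWS_PROFILE=myProfile serverless deploy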
Where myProfile is the name of the AWS profile you want to use. If you need more info on
how to work with AWS profiles in Serverless, refer to our Configure multiple AWS profiles
(/chapters/configure-multiple-aws-profiles.html) chapter.
Near the bottom of the output for this command, you will find the Service Information.
Service Information
service: notes-app-api
stage: prod
region: us-east-1
api keys:
  None
endpoints:
  POST - https://ly55wbovq4.execute-api.us-east-1.amazonaws.com/prod/notes
  GET - https://ly55wbovq4.execute-api.us-east-1.amazonaws.com/prod/notes/{id}
  GET - https://ly55wbovq4.execute-api.us-east-1.amazonaws.com/prod/notes
  PUT - https://ly55wbovq4.execute-api.us-east-1.amazonaws.com/prod/notes/{id}
  DELETE - https://ly55wbovq4.execute-api.us-east-1.amazonaws.com/prod/notes/{id}
functions:
  notes-app-api-prod-create
  notes-app-api-prod-get
  notes-app-api-prod-list
  notes-app-api-prod-update
  notes-app-api-prod-delete
This has a list of the API endpoints that were created. Make a note of these endpoints as we are
going to use them later while creating our frontend. Also make a note of the region and the id in
these endpoints, we are going to use them in the coming chapters. In our case, us-east-1 is
our API Gateway Region and ly55wbovq4 is our API Gateway ID.
For example, to deploy the list function again, we can run the following.
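$ serverless deploy function -f list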
Now before we test our APIs we have one final thing to set up. We need to ensure that our
users can securely access the AWS resources we have created so far. Let’s look at setting up a
Cognito Identity Pool.
Amazon Cognito Federated Identities enables developers to create unique identities for your
users and authenticate them with federated identity providers. With a federated identity, you
can obtain temporary, limited-privilege AWS credentials to securely access other AWS services
such as Amazon DynamoDB, Amazon S3, and Amazon API Gateway.
In this chapter, we are going to create a federated Cognito Identity Pool. We will be using our
User Pool as the identity provider. We could also use Facebook, Google, or our own custom
identity provider. Once a user is authenticated via our User Pool, the Identity Pool will attach
an IAM Role to the user. We will define a policy for this IAM Role to grant access to the S3
bucket and our API. This is the Amazon way of securing your resources.
Create Pool
From your AWS Console (https://console.aws.amazon.com), select Cognito from the list of
services.
Select Manage Federated Identities.
Enter an Identity pool name.
Select Authentication providers. Under Cognito tab, enter User Pool ID and App Client ID of
the User Pool created in the Create a Cognito user pool (/chapters/create-a-cognito-user-
pool.html) chapter. Select Create Pool.
Now we need to specify what AWS resources are accessible for users with temporary
credentials obtained from the Cognito Identity Pool.
Select View Details. Two Role Summary sections are expanded. The top section summarizes
the permission policy for authenticated users, and the bottom section summarizes that for
unauthenticated users.
Select View Policy Document in the top section. Then select Edit.
It will warn you to read the documentation. Select Ok to edit.
Add the following policy into the editor. Replace
YOUR_S3_UPLOADS_BUCKET_NAME with the bucket name from the Create an S3 bucket for
file uploads (/chapters/create-an-s3-bucket-for-file-uploads.html) chapter. And replace the
YOUR_API_GATEWAY_REGION and YOUR_API_GATEWAY_ID with the ones that you get after
you deployed your API in the last chapter.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "mobileanalytics:PutEvents",
        "cognito-sync:*",
        "cognito-identity:*"
      ],
      "Resource": [
        "*"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:*"
      ],
      "Resource": [
        "arn:aws:s3:::YOUR_S3_UPLOADS_BUCKET_NAME/${cognito-identity.amazonaws.com:sub}*"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "execute-api:Invoke"
      ],
      "Resource": [
        "arn:aws:execute-api:YOUR_API_GATEWAY_REGION:*:YOUR_API_GATEWAY_ID/*"
      ]
    }
  ]
}
So effectively we are telling AWS that an authenticated user has access to two resources.
1. Files in our S3 bucket that are prefixed with their federated identity id
2. And, the APIs we deployed using API Gateway
One other thing to note is that the federated identity id is a UUID that is assigned by our
Identity Pool. This is the id ( event.requestContext.identity.cognitoIdentityId )
that we were using as our user id back when we were creating our APIs.
Select Allow.
Our Cognito Identity Pool should now be created. Let’s find out the Identity Pool ID.
Select Dashboard from the left panel, then select Edit identity pool.
Take a note of the Identity pool ID which will be required in the later chapters.
Now before we test our serverless API let’s take a quick look at the Cognito User Pool and
Cognito Identity Pool and make sure we’ve got a good idea of the two concepts and the
differences between them.
Amazon Cognito User Pool makes it easy for developers to add sign-up and sign-in
functionality to web and mobile applications. It serves as your own identity provider to
maintain a user directory. It supports user registration and sign-in, as well as provisioning
identity tokens for signed-in users.
Amazon Cognito Federated Identities enables developers to create unique identities for your
users and authenticate them with federated identity providers. With a federated identity, you
can obtain temporary, limited-privilege AWS credentials to securely access other AWS
services such as Amazon DynamoDB, Amazon S3, and Amazon API Gateway.
Unfortunately they are both a bit vague and confusingly similar. Here is a more practical
description of what they are.
User Pool
Say you were creating a new web or mobile app and you were thinking about how to handle
user registration, authentication, and account recovery. This is where Cognito User Pools
would come in. Cognito User Pool handles all of this and as a developer you just need to use the
SDK to retrieve user related information.
Identity Pool
Cognito Identity Pool (or Cognito Federated Identities) on the other hand is a way to authorize
your users to use the various AWS services. Say you wanted to allow a user to have access to
your S3 bucket so that they could upload a file; you could specify that while creating an Identity
Pool. And to create these levels of access, the Identity Pool has its own concept of an identity
(or user). The source of these identities (or users) could be a Cognito User Pool or even
Facebook or Google.
So in summary; the Cognito User Pool stores all your users which then plugs into your Cognito
Identity Pool which can give your users access to your AWS services.
Now that we have a good understanding of how our users will be handled, let’s finish up our
backend by testing our APIs.
To be able to hit our API endpoints securely, we need to follow these steps.

1. Authenticate against our User Pool and acquire a user token.
2. With the user token, get temporary IAM credentials from our Identity Pool.
3. Use the IAM credentials to sign our API request with Signature Version 4.
These steps can be a bit tricky to do by hand. So we created a simple tool called AWS API
Gateway Test CLI (https://github.com/AnomalyInnovations/aws-api-gateway-cli-test).
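Install it globally.

$ npm install -g aws-api-gateway-cli-test

Then run a command along these lines; the flags shown assume the tool’s documented options,
and the email and password are placeholders to be replaced as described below.

$ apig-test \
--username='[email protected]' \
--password='Passw0rd!' \
--user-pool-id='YOUR_COGNITO_USER_POOL_ID' \
--app-client-id='YOUR_COGNITO_APP_CLIENT_ID' \
--cognito-region='YOUR_COGNITO_REGION' \
--identity-pool-id='YOUR_IDENTITY_POOL_ID' \
--invoke-url='YOUR_API_GATEWAY_URL' \
--api-gateway-region='YOUR_API_GATEWAY_REGION' \
--path-template='/notes' \
--method='POST' \
--body='{"content":"hello world","attachment":"hello.jpg"}'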
We need to pass in quite a bit of our info to complete the above steps.
Use the username and password of the user created in the Create a Cognito test user
(/chapters/create-a-cognito-test-user.html) chapter.
Replace YOUR_COGNITO_USER_POOL_ID, YOUR_COGNITO_APP_CLIENT_ID, and
YOUR_COGNITO_REGION with the values from the Create a Cognito user pool
(/chapters/create-a-cognito-user-pool.html) chapter. In our case the region is us-east-
1 .
Replace YOUR_IDENTITY_POOL_ID with the one from the Create a Cognito identity pool
(/chapters/create-a-cognito-identity-pool.html) chapter.
Use the YOUR_API_GATEWAY_URL and YOUR_API_GATEWAY_REGION with the ones
from the Deploy the APIs (/chapters/deploy-the-apis.html) chapter. In our case the URL is
https://ly55wbovq4.execute-api.us-east-1.amazonaws.com/prod and the
region is us-east-1 .
While this might look intimidating, just keep in mind that behind the scenes all we are doing is
generating some security headers before making a basic HTTP request. You’ll see more of this
process when we connect our React.js app to our API backend.
If you are on Windows, use the command below. The space between each option is very
important.
And that’s it for the backend! Next we are going to move on to creating the frontend of our app.
Common Issues
This is the most common issue we come across and it is a bit cryptic and can be hard to
debug. Here are a few things to check before you start debugging:
There are no trailing slashes for YOUR_API_GATEWAY_URL . In our case, the URL is
https://ly55wbovq4.execute-api.us-east-1.amazonaws.com/prod . Notice
that it does not end with a / .
If you’re on Windows and are using Git Bash, try adding a trailing slash to
YOUR_API_GATEWAY_URL while removing the leading slash from --path-
template . In our case, it would result in --invoke-url
https://ly55wbovq4.execute-api.us-east-1.amazonaws.com/prod/ --
path-template notes . You can follow the discussion on this here
(https://github.com/AnomalyInnovations/serverless-stack-
com/issues/112#issuecomment-345996566).
There is a good chance that this error is happening even before our Lambda functions are
invoked. So we can start by making sure our IAM Roles are configured properly for our
Identity Pool. Follow the steps as detailed in our Debugging Serverless API Issues
(/chapters/debugging-serverless-api-issues.html#missing-iam-policy) chapter to ensure
that your IAM Roles have the right set of permissions.
Finally, make sure to look at the comment thread below. We’ve helped quite a few people
with similar issues and it’s very likely that somebody has run into a similar issue as you.
If instead your command fails with the {status: false} response, we can do a few
things to debug this. This response is generated by our Lambda functions when there is an
error. Add a console.log like so in your handler function.
catch(e) {
console.log(e);
callback(null, failure({status: false}));
}
And deploy it using serverless deploy function -f create . But we can’t see this
output when we make an HTTP request to it, since the console logs are not sent in our
HTTP responses. We need to check the logs to see this. We have a detailed chapter
(/chapters/api-gateway-and-lambda-logs.html#viewing-lambda-cloudwatch-logs) on
working with API Gateway and Lambda logs and you can read about how to check your
debug messages here (/chapters/api-gateway-and-lambda-logs.html#viewing-lambda-
cloudwatch-logs).
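We’ll use Create React App (https://github.com/facebookincubator/create-react-app) to set
up our frontend. Install it globally (assuming NPM from the earlier chapters).

$ npm install -g create-react-app

And create our new app.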
$ create-react-app notes-app-client
This should take a second to run, and it will create your new project and your new working
directory.
Now let’s go into our working directory and run our project.
$ cd notes-app-client
$ npm start
Create React App comes pre-loaded with a pretty convenient yet minimal development
environment. It includes live reloading, a testing framework, ES6 support, and much more
(https://github.com/facebookincubator/create-react-app#why-use-this).
Next, we are going to create our app icon and update the favicons.
For our example, we are going to start with a simple image and generate the various versions
from it.
To ensure that our icon works for most of our targeted platforms we’ll use a service called the
Favicon Generator (http://realfavicongenerator.net).
Click Favicon package to download the generated favicons. And copy all the files
over to your public/ directory.
Then replace the contents of public/manifest.json with the following:
{
"short_name": "Scratch",
"name": "Scratch Note Taking App",
"icons": [
{
"src": "android-chrome-192x192.png",
"sizes": "192x192",
"type": "image/png"
},
{
"src": "android-chrome-256x256.png",
"sizes": "256x256",
"type": "image/png"
}
],
"start_url": "./index.html",
"display": "standalone",
"theme_color": "#ffffff",
"background_color": "#ffffff"
}
To include a file from the public/ directory in your HTML, Create React App needs the
%PUBLIC_URL% prefix.
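Add the following to the <head> of public/index.html; these paths assume the file names
generated by the service.

<link rel="apple-touch-icon" sizes="180x180" href="%PUBLIC_URL%/apple-touch-icon.png">
<link rel="icon" type="image/png" href="%PUBLIC_URL%/favicon-32x32.png" sizes="32x32">
<link rel="icon" type="image/png" href="%PUBLIC_URL%/favicon-16x16.png" sizes="16x16">
<link rel="manifest" href="%PUBLIC_URL%/manifest.json">
<link rel="mask-icon" href="%PUBLIC_URL%/safari-pinned-tab.svg" color="#5bbad5">
<meta name="theme-color" content="#ffffff">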
And remove the following lines that reference the original favicon and theme color.
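Assuming the default Create React App index.html, these are the lines to remove.

<link rel="shortcut icon" href="%PUBLIC_URL%/favicon.ico">
<meta name="theme-color" content="#000000">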
Finally head over to your browser and try the /favicon-32x32.png path to ensure that the
files were added correctly.
Next we are going to look into setting up custom fonts in our app.
This also gives us a chance to explore the structure of our newly created React.js app.
Let’s first include them in the HTML. Our React.js app is using a single HTML file.
Go ahead and edit public/index.html and add the following line in the
<head> section of the HTML to include the two typefaces.
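Assuming we go with Open Sans for the body and PT Serif for the headers, the line looks like
this.

<link rel="stylesheet" type="text/css" href="https://fonts.googleapis.com/css?family=PT+Serif|Open+Sans:300,400,600,700,800">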
Here we are referencing all the 5 different weights (300, 400, 600, 700, and 800) of the Open
Sans typeface.
Let’s change the current font in src/index.css for the body tag from
font-family: sans-serif; to the following.

font-family: "Open Sans", sans-serif;
font-size: 16px;
color: #333;
And let’s change the fonts for the header tags to our new Serif font by adding this
block to the css file.
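Assuming the PT Serif typeface from the link above, the block looks like this.

h1, h2, h3, h4, h5, h6 {
  font-family: "PT Serif", serif;
}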
Now if you just flip over to your browser with our new app, you should see the new fonts
update automatically; thanks to the live reloading.
We’ll stay on the theme of adding styles and set up our project with Bootstrap to ensure that
we have a consistent UI Kit to work with while building our app.
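Run the following command in your working directory to add React-Bootstrap, which we’ll use
for our components.

$ npm install react-bootstrap --save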
This installs the NPM package and adds the dependency to your package.json .
Add the following to the <head> of public/index.html to include the Bootstrap styles.

<link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/latest/css/bootstrap.min.css">
We’ll also tweak the styles of the form fields so that the mobile browser does not zoom in on
them on focus. We just need them to have a minimum font size of 16px to prevent the zoom.
select.form-control,
textarea.form-control,
input.form-control {
font-size: 16px;
}
input[type=file] {
width: 100%;
}
We are also setting the width of the input type file to prevent the page on mobile from
overflowing and adding a scrollbar.
Now if you head over to your browser, you might notice that the styles have shifted a bit. This is
because Bootstrap includes Normalize.css (http://necolas.github.io/normalize.css/) to have
more consistent styles across browsers.
Next, we are going to create a few routes for our application and set up the React Router.
Let’s start by installing React Router. We are going to be using the React Router v4, the newest
version of React Router. React Router v4 can be used on the web and in native. So let’s install
the one for the web.
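$ npm install react-router-dom --save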
This installs the NPM package and adds the dependency to your package.json .
Now if you head over to your browser, your app should load just like before. The only difference
being that we are using React Router to serve out our pages.
Next we are going to look into how to organize the different pages of our app.
Add a Navbar
Let’s start by creating the outer chrome of our application by first adding a navigation bar to it.
We are going to use the Navbar (https://react-bootstrap.github.io/components.html#navbars)
React-Bootstrap component.
And go ahead and remove the code inside src/App.js and replace it with the
following. Also, you can go ahead and remove src/logo.svg .
Let’s also add a couple of lines of styles to space things out a bit more.

Remove all the code inside src/App.css and replace it with the following:
.App {
margin-top: 15px;
}
.App .navbar-brand {
font-weight: bold;
}
This simply renders our homepage given that the user is not currently signed in.
.Home .lander {
padding: 80px 0;
text-align: center;
}
.Home .lander h1 {
font-family: "Open Sans", sans-serif;
font-weight: 600;
}
.Home .lander p {
color: #999;
}
This component uses the Switch component from React-Router that renders the first
matching route defined within it. For now we only have a single route; it looks for / and
renders the Home component when matched. We are also using the exact prop to ensure
that it matches the / route exactly, because the path / will also match any route that
starts with a /.
And add the following line below our Navbar component inside the render of
src/App.js .
<Routes />
So the render method of our src/App.js should now look like this.
render() {
return (
<div className="App container">
<Navbar fluid collapseOnSelect>
<Navbar.Header>
<Navbar.Brand>
<Link to="/">Scratch</Link>
</Navbar.Brand>
<Navbar.Toggle />
</Navbar.Header>
</Navbar>
<Routes />
</div>
);
}
This ensures that as we navigate to different routes in our app, the portion below the navbar
will change to reflect that.
Finally, head over to your browser and your app should show the brand new homepage of your
app.
Next we are going to add login and signup links to our navbar.
render() {
return (
<div className="App container">
<Navbar fluid collapseOnSelect>
<Navbar.Header>
<Navbar.Brand>
<Link to="/">Scratch</Link>
</Navbar.Brand>
<Navbar.Toggle />
</Navbar.Header>
<Navbar.Collapse>
<Nav pullRight>
<NavItem href="/signup">Signup</NavItem>
<NavItem href="/login">Login</NavItem>
</Nav>
</Navbar.Collapse>
</Navbar>
<Routes />
</div>
);
}
This adds two links to our navbar using the NavItem Bootstrap component. The
Navbar.Collapse component ensures that on mobile devices the two links will be collapsed.
Now if you flip over to your browser, you should see the two links in our navbar.
Unfortunately, they don’t do a whole lot when you click on them. We also need them to
highlight when we navigate to that page. To fix this we are going to use a useful feature of
React-Router. We are going to use the Route component to detect when we are on a certain
page and then render based on it. Since we are going to do this twice, let’s make this into a
component that can be re-used, as sketched below.
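Create a new file src/components/RouteNavItem.js; a sketch that matches the behavior
described below.

import React from "react";
import { Route } from "react-router-dom";
import { NavItem } from "react-bootstrap";

export default props =>
  <Route
    path={props.href}
    exact
    children={({ match, history }) =>
      <NavItem
        onClick={e => history.push(e.currentTarget.getAttribute("href"))}
        {...props}
        active={match ? true : false}
      >
        {props.children}
      </NavItem>}
  />;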
1. We look at the href for the NavItem and check if there is a match.
2. React-Router passes in a match object in case there is a match. We use that and set the
active prop for the NavItem .
3. React-Router also passes us a history object. We use this to navigate to the new page
using history.push .
And remove the NavItem from the header of src/App.js, so that the react-bootstrap
import looks like this.

import { Nav, Navbar } from "react-bootstrap";

Also import our new component.

import RouteNavItem from "./components/RouteNavItem";

Then replace our two NavItems in the render method with the following.
<RouteNavItem href="/signup">Signup</RouteNavItem>
<RouteNavItem href="/login">Login</RouteNavItem>
And that’s it! Now if you flip over to your browser and click on the login link, you should see the
link highlighted in the navbar.
You’ll notice that we are not rendering anything on the page because we don’t have a login page
currently. We should handle the case when a requested page is not found.
Next let’s look at how to tackle handling 404s with our router.
Create a Component
Let’s start by creating a component that will handle this for us.
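Create a new file src/containers/NotFound.js; a minimal sketch.

import React from "react";
import "./NotFound.css";

export default () =>
  <div className="NotFound">
    <h3>Sorry, page not found!</h3>
  </div>;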
All this component does is print out a simple message for us. Let’s also add a couple of styles
for it in a NotFound.css file.
.NotFound {
padding-top: 100px;
text-align: center;
}
Find the <Switch> block in src/Routes.js and add it as the last line in that
section.
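The catch-all route looks like this.

<Route component={NotFound} />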
This needs to always be the last line in the <Route> block. You can think of it as the route that
handles requests in case all the other routes before it have failed.
And include the NotFound component in the header by adding the following:
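import NotFound from "./containers/NotFound";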
And that’s it! Now if you were to switch over to your browser and try clicking on the Login or
Signup buttons in the Nav you should see the 404 message that we have.
Next up, we are going to work on creating our login and sign up forms.
So let’s start by creating the basic form that’ll take the user’s email (as their username) and
password.

Create a new file src/containers/Login.js; a minimal version looks something like this
(assuming a Login.css file alongside it).

import React, { Component } from "react";
import { Button, FormGroup, FormControl, ControlLabel } from "react-bootstrap";
import "./Login.css";

export default class Login extends Component {
  constructor(props) {
    super(props);

    this.state = {
      email: "",
      password: ""
    };
  }

  validateForm() {
    return this.state.email.length > 0 && this.state.password.length > 0;
  }
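  handleChange = event => {
    // Update the field's state using its controlId as the key
    this.setState({
      [event.target.id]: event.target.value
    });
  }

  handleSubmit = event => {
    // For now, just suppress the browser's default submit behavior
    event.preventDefault();
  }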
render() {
return (
<div className="Login">
<form onSubmit={this.handleSubmit}>
<FormGroup controlId="email" bsSize="large">
<ControlLabel>Email</ControlLabel>
<FormControl
autoFocus
type="email"
value={this.state.email}
onChange={this.handleChange}
/>
</FormGroup>
<FormGroup controlId="password" bsSize="large">
<ControlLabel>Password</ControlLabel>
<FormControl
value={this.state.password}
onChange={this.handleChange}
type="password"
/>
</FormGroup>
<Button
block
bsSize="large"
disabled={!this.validateForm()}
type="submit"
>
Login
</Button>
</form>
</div>
);
}
}
1. In the constructor of our component we create a state object. This will be where we’ll store
what the user enters in the form.
2. We then connect the state to our two fields in the form by setting this.state.email
and this.state.password as the value in our input fields. This means that when the
state changes, React will re-render these components with the updated value.
3. But to update the state when the user types something into these fields, we’ll call a handle
function named handleChange . This function grabs the id (set as controlId for the
<FormGroup> ) of the field being changed and updates its state with the value the user is
typing in. Also, to have access to the this keyword inside handleChange we store the
reference to an anonymous function like so: handleChange = (event) => { } .
4. We are setting the autoFocus flag for our email field, so that when our form loads, it sets
focus to this field.
5. We also link up our submit button with our state by using a validate function called
validateForm . This simply checks if our fields are non-empty, but can easily do
something more complicated.
6. Finally, we trigger our callback handleSubmit when the form is submitted. For now we
are simply suppressing the browser’s default behavior on submit but we’ll do more here
later.
Now if we switch to our browser and navigate to the login page we should see our newly
created form.
Next, let’s connect our login form to our AWS Cognito set up.
Create a new file src/config.js and add the following, replacing the placeholders with the
User Pool ID and App client id we noted in the earlier chapters.

export default {
cognito: {
USER_POOL_ID: "YOUR_COGNITO_USER_POOL_ID",
APP_CLIENT_ID: "YOUR_COGNITO_APP_CLIENT_ID"
}
};
And to load it into our login form simply import it by adding the following to the
header of our Login container in src/containers/Login.js .
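import config from "../config";

We’ll also use the Cognito JS SDK for the login call; assuming the
amazon-cognito-identity-js package is installed, add this import as well.

import {
  CognitoUserPool,
  CognitoUser,
  AuthenticationDetails
} from "amazon-cognito-identity-js";

Then add the following login method to our Login container.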
login(email, password) {
  const userPool = new CognitoUserPool({
    UserPoolId: config.cognito.USER_POOL_ID,
    ClientId: config.cognito.APP_CLIENT_ID
  });
  const user = new CognitoUser({ Username: email, Pool: userPool });
  const authenticationData = { Username: email, Password: password };
  const authenticationDetails = new AuthenticationDetails(authenticationData);

  // Wrap the callback based authenticateUser call in a Promise
  return new Promise((resolve, reject) =>
    user.authenticateUser(authenticationDetails, {
      onSuccess: result => resolve(),
      onFailure: err => reject(err)
    })
  );
}
1. It creates a new CognitoUserPool using the details from our config. And it creates a
new CognitoUser using the email that is passed in.
2. It then authenticates our user using the authentication details with the
user.authenticateUser method.
3. Since the login call is asynchronous, we return a Promise object. This way we can call this
method directly without fiddling with callbacks.
Trigger Login onSubmit
To connect the above login method to our form simply replace our placeholder
handleSubmit method in src/containers/Login.js with the following.
handleSubmit = async event => {
  event.preventDefault();

  try {
    await this.login(this.state.email, this.state.password);
    alert("Logged in");
  } catch (e) {
    alert(e);
  }
}
1. We grab the email and password from this.state and call our login method
with it.
2. We use the await keyword to invoke the login method that returns a promise. And we
need to label our handleSubmit method as async .
Now if you try to login using the [email protected] user (that we created in the Create a
Cognito Test User (/chapters/create-a-cognito-test-user.html) chapter), you should see the
browser alert that tells you that the login was successful.
Next, we’ll take a look at storing the login state in our app.
Add the following to src/App.js right below the class App extends
Component { line.
constructor(props) {
super(props);
this.state = {
isAuthenticated: false
};
}
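And add the following method right below it.

userHasAuthenticated = authenticated => {
  this.setState({ isAuthenticated: authenticated });
}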
This initializes the isAuthenticated flag in the App’s state. And calling
userHasAuthenticated updates it. But for the Login container to call this method we
need to pass a reference of this method to it.
const childProps = {
isAuthenticated: this.state.isAuthenticated,
userHasAuthenticated: this.userHasAuthenticated
};
And pass them into our Routes component by replacing the following line in the
render method of src/App.js .
<Routes />
With this.
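<Routes childProps={childProps} />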
Currently, our Routes component does not do anything with the passed in childProps .
We need it to apply these props to the child component it is going to render. In this case we
need it to apply them to our Login component.
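Create a new file src/components/AppliedRoute.js; a sketch of this component looks like
the following.

import React from "react";
import { Route } from "react-router-dom";

export default ({ component: C, props: cProps, ...rest }) =>
  <Route {...rest} render={props => <C {...props} {...cProps} />} />;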
This simple component creates a Route where the child component that it renders contains
the passed in props. Let’s take a quick look at how this is being done.
The Route component takes a prop called component that represents the component
that will be rendered when a matching route is found. We want our childProps to be
sent to this component.
The Route component can also take a render method in place of the component .
This allows us to control what is passed in to our component.
Based on this we can create a component that returns a Route and takes a component
and childProps prop. This allows us to pass in the component we want rendered and
the props that we want applied.
Finally, we take component (set as C ) and props (set as cProps ) and render inside
our Route using the inline function; props => <C {...props} {...cProps} /> .
Note, the props variable in this case is what the Route component passes us. Whereas
the cProps is the childProps that we want to set.
Now to use this component, we are going to include it in the routes where we need to have the
childProps passed in.
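For example, our login route in src/Routes.js would now look something like this:

<AppliedRoute path="/login" exact component={Login} props={childProps} />

And in the handleSubmit method of our Login container we can now replace the alert("Logged in"); line with a call to the setter we were passed in.

this.props.userHasAuthenticated(true);

Next, let's update our Navbar to reflect the session state. Replace the following in the render method of src/App.js.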
<RouteNavItem href="/signup">Signup</RouteNavItem>
<RouteNavItem href="/login">Login</RouteNavItem>
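With this.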
{this.state.isAuthenticated
? <NavItem onClick={this.handleLogout}>Logout</NavItem>
: [
<RouteNavItem key={1} href="/signup">
Signup
</RouteNavItem>,
<RouteNavItem key={2} href="/login">
Login
</RouteNavItem>
]}
Now if you refresh your page you should be logged out again. This is because we are not
initializing the state from the browser session. Let’s look at how to do that next.
Create src/libs/awsLib.js with the following helper methods.

import { CognitoUserPool } from "amazon-cognito-identity-js";
import config from "../config";

export async function authUser() {
  const currentUser = getCurrentUser();

  if (currentUser === null) {
    return false;
  }

  await getUserToken(currentUser);

  return true;
}
function getUserToken(currentUser) {
return new Promise((resolve, reject) => {
currentUser.getSession(function(err, session) {
if (err) {
reject(err);
return;
}
resolve(session.getIdToken().getJwtToken());
});
});
}
function getCurrentUser() {
const userPool = new CognitoUserPool({
UserPoolId: config.cognito.USER_POOL_ID,
ClientId: config.cognito.APP_CLIENT_ID
});
return userPool.getCurrentUser();
}
The authUser method is getting the current user from the Local Storage using the Cognito JS
SDK. We then get that user’s session and their user token in getUserToken . The
currentUser.getSession also refreshes the user session in case it has expired. Finally in
the authUser method we return true if we are able to authenticate the user and false if
the user is not logged in.
To do this, let's add an isAuthenticating flag to the state of our App component in src/App.js.

this.state = {
isAuthenticated: false,
isAuthenticating: true
};
Let’s include the authUser method that we created by adding it to the header of
src/App.js .
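Assuming authUser lives in src/libs/awsLib.js (the library file we keep adding to in this guide), the import looks like this:

import { authUser } from "./libs/awsLib";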
Now to load the user session we’ll add the following to our src/App.js .
async componentDidMount() {
  try {
    if (await authUser()) {
      this.userHasAuthenticated(true);
    }
  }
  catch(e) {
    alert(e);
  }

  this.setState({ isAuthenticating: false });
}
All this does is check if there is a valid user in the session. It then updates the
isAuthenticating flag once the process is complete.
render() {
const childProps = {
isAuthenticated: this.state.isAuthenticated,
userHasAuthenticated: this.userHasAuthenticated
};
return (
!this.state.isAuthenticating &&
<div className="App container">
<Navbar fluid collapseOnSelect>
<Navbar.Header>
<Navbar.Brand>
<Link to="/">Scratch</Link>
</Navbar.Brand>
<Navbar.Toggle />
</Navbar.Header>
<Navbar.Collapse>
<Nav pullRight>
{this.state.isAuthenticated
? <NavItem onClick={this.handleLogout}>Logout</NavItem>
: [
<RouteNavItem key={1} href="/signup">
Signup
</RouteNavItem>,
<RouteNavItem key={2} href="/login">
Login
</RouteNavItem>
]}
</Nav>
</Navbar.Collapse>
</Navbar>
<Routes childProps={childProps} />
</div>
);
}
Now if you head over to your browser and refresh the page, you should see that a user is logged
in.
Unfortunately, when we hit Logout and refresh the page; we are still logged in. To fix this we are
going to clear the session on logout next.
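Let's add a helper for this to src/libs/awsLib.js; a sketch (the name signOutUser is an assumption, following the naming of our other helpers):

export function signOutUser() {
  const currentUser = getCurrentUser();

  if (currentUser !== null) {
    currentUser.signOut();
  }
}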
Here we are using the AWS Cognito JS SDK to log the user out by calling
currentUser.signOut() .
Next we’ll include that in our App component. Replace the import { authUser
} line in the header of src/App.js with:
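Assuming the signOutUser helper we sketched above:

import { authUser, signOutUser } from "./libs/awsLib";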
And update the handleLogout method in src/App.js to use it.

handleLogout = event => {
  signOutUser();

  this.userHasAuthenticated(false);
}
Now if you head over to your browser, logout and then refresh the page; you should be logged
out completely.
If you try out the entire login flow from the beginning you'll notice that we continue to stay on
the login page throughout the entire process. Next, we'll look at redirecting the page after we
login and logout to make the flow make more sense.
We are going to use the history.push method that comes with React Router v4. Add the
following right after we call userHasAuthenticated in the handleSubmit method of
src/containers/Login.js.

this.props.history.push("/");

Our updated handleSubmit method should look like this:

handleSubmit = async event => {
  event.preventDefault();

  try {
await this.login(this.state.email, this.state.password);
this.props.userHasAuthenticated(true);
this.props.history.push("/");
} catch (e) {
alert(e);
}
}
Now if you head over to your browser and try logging in, you should be redirected to the
homepage after you’ve been logged in.
Redirect to Login After Logout
Now we’ll do something very similar for the logout process. However, the App component
does not have access to the router props directly since it is not rendered inside a Route
component. To be able to use the router props in our App component we will need to use the
withRouter Higher-Order Component (https://facebook.github.io/react/docs/higher-order-
components.html) (or HOC). You can read more about the withRouter HOC here
(https://reacttraining.com/react-router/web/api/withRouter).
To use this HOC, we’ll change the way we export our App component.
With this.
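Assuming our App was previously exported with a plain export default App; line at the bottom of src/App.js, it now becomes:

export default withRouter(App);

You'll also need to import withRouter from react-router-dom in the header.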
Then add the following to the bottom of our handleLogout method in src/App.js.

this.props.history.push("/login");

So our handleLogout method now looks like this.

handleLogout = event => {
  signOutUser();

  this.userHasAuthenticated(false);
  this.props.history.push("/login");
}
This redirects us back to the login page once the user logs out.
Now if you switch over to your browser and try logging out, you should be redirected to the
login page.
You might have noticed while testing this flow that since the login call has a bit of a delay, we
might need to give some feedback to the user that the login call is in progress. Let’s do that next.
Let's start by adding an isLoading flag to the state of our src/containers/Login.js.

this.state = {
isLoading: false,
email: "",
password: ""
};
And we’ll update it while we are logging in. So our handleSubmit method now
looks like so:
handleSubmit = async event => {
  event.preventDefault();

  this.setState({ isLoading: true });

  try {
await this.login(this.state.email, this.state.password);
this.props.userHasAuthenticated(true);
this.props.history.push("/");
} catch (e) {
alert(e);
this.setState({ isLoading: false });
}
}
Now let's create our loading button. Add the following to src/components/LoaderButton.js.

import React from "react";
import { Button, Glyphicon } from "react-bootstrap";
import "./LoaderButton.css";

export default ({
isLoading,
text,
loadingText,
className = "",
disabled = false,
...props
}) =>
<Button
className={`LoaderButton ${className}`}
disabled={disabled || isLoading}
{...props}
>
{isLoading && <Glyphicon glyph="refresh" className="spinning" />}
{!isLoading ? text : loadingText}
</Button>;
This is a really simple component that takes an isLoading flag and the text that the button
displays in the two states (the default state and the loading state). The disabled prop is a
result of what we currently have in our Login button. And we ensure that the button is
disabled when isLoading is true . This makes it so that the user can't click it while we are in
the process of logging them in.
And let’s add a couple of styles to animate our loading icon.
.LoaderButton .spinning.glyphicon {
margin-right: 7px;
top: 2px;
animation: spin 1s infinite linear;
}
@keyframes spin {
from { transform: scale(1) rotate(0deg); }
to { transform: scale(1) rotate(360deg); }
}
This spins the refresh Glyphicon infinitely with each spin taking a second. And by adding these
styles as a part of the LoaderButton we keep them self contained within the component. Now
we can use our new button in the Login container. Replace the following Button in the render
method of src/containers/Login.js.
<Button
block
bsSize="large"
disabled={!this.validateForm()}
type="submit"
>
Login
</Button>
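With this.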
<LoaderButton
block
bsSize="large"
disabled={!this.validateForm()}
type="submit"
isLoading={this.state.isLoading}
text="Login"
loadingText="Logging in…"
/>
Also, import the LoaderButton in the header. And remove the reference to the
Button component.
And now if you switch over to the browser and try logging in, you should see the
intermediate state before the login completes.
1. The user types in their email, password, and confirms their password.
2. We sign them up using AWS Cognito and get a user object in return.
3. We then render a form to accept the confirmation code that AWS Cognito has emailed to
them.
this.state = {
isLoading: false,
email: "",
password: "",
confirmPassword: "",
confirmationCode: "",
newUser: null
};
}
validateForm() {
return (
this.state.email.length > 0 &&
this.state.password.length > 0 &&
this.state.password === this.state.confirmPassword
);
}
validateConfirmationForm() {
return this.state.confirmationCode.length > 0;
}
renderConfirmationForm() {
return (
<form onSubmit={this.handleConfirmationSubmit}>
<FormGroup controlId="confirmationCode" bsSize="large">
<ControlLabel>Confirmation Code</ControlLabel>
<FormControl
autoFocus
type="tel"
value={this.state.confirmationCode}
onChange={this.handleChange}
/>
<HelpBlock>Please check your email for the code.</HelpBlock>
</FormGroup>
<LoaderButton
block
bsSize="large"
disabled={!this.validateConfirmationForm()}
type="submit"
isLoading={this.state.isLoading}
text="Verify"
loadingText="Verifying…"
/>
</form>
);
}
renderForm() {
return (
<form onSubmit={this.handleSubmit}>
<FormGroup controlId="email" bsSize="large">
<ControlLabel>Email</ControlLabel>
<FormControl
autoFocus
type="email"
value={this.state.email}
onChange={this.handleChange}
/>
</FormGroup>
<FormGroup controlId="password" bsSize="large">
<ControlLabel>Password</ControlLabel>
<FormControl
value={this.state.password}
onChange={this.handleChange}
type="password"
/>
</FormGroup>
<FormGroup controlId="confirmPassword" bsSize="large">
<ControlLabel>Confirm Password</ControlLabel>
<FormControl
value={this.state.confirmPassword}
onChange={this.handleChange}
type="password"
/>
</FormGroup>
<LoaderButton
block
bsSize="large"
disabled={!this.validateForm()}
type="submit"
isLoading={this.state.isLoading}
text="Signup"
loadingText="Signing up…"
/>
</form>
);
}
render() {
return (
<div className="Signup">
{this.state.newUser === null
? this.renderForm()
: this.renderConfirmationForm()}
</div>
);
}
}
Most of the things we are doing here are fairly straightforward but let’s go over them quickly.
1. Since we need to show the user a form to enter the confirmation code, we are conditionally
rendering two forms based on whether we have a user object or not.
2. We are using the LoaderButton component that we created earlier for our submit
buttons.
3. Since we have two forms we have two validation methods called validateForm and
validateConfirmationForm .
4. We are setting the autoFocus flags on the email and the confirmation code fields.
.Signup form {
  margin: 0 auto;
  max-width: 320px;
}
Now if we switch to our browser and navigate to the signup page we should see our newly
created form. Try filling it in and ensure that it shows the confirmation code form as well.
handleSubmit = async event => {
  event.preventDefault();

  this.setState({ isLoading: true });

  try {
    const newUser = await this.signup(this.state.email, this.state.password);
    this.setState({ newUser: newUser });
  } catch (e) {
    alert(e);
  }

  this.setState({ isLoading: false });
}
handleConfirmationSubmit = async event => {
  event.preventDefault();

  this.setState({ isLoading: true });

  try {
    await this.confirm(this.state.newUser, this.state.confirmationCode);
    await this.authenticate(
      this.state.newUser,
      this.state.email,
      this.state.password
    );

    this.props.userHasAuthenticated(true);
    this.props.history.push("/");
  } catch (e) {
    alert(e);
    this.setState({ isLoading: false });
  }
}
signup(email, password) {
  const userPool = new CognitoUserPool({
    UserPoolId: config.cognito.USER_POOL_ID,
    ClientId: config.cognito.APP_CLIENT_ID
  });

  return new Promise((resolve, reject) =>
    userPool.signUp(email, password, [], null, (err, result) => {
      if (err) {
        reject(err);
        return;
      }
      resolve(result.user);
    })
  );
}
confirm(user, confirmationCode) {
  return new Promise((resolve, reject) =>
    user.confirmRegistration(confirmationCode, true, function(err, result) {
      if (err) {
        reject(err);
        return;
      }
      resolve(result);
    })
  );
}
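The authenticate method referenced in handleConfirmationSubmit is essentially the same as the login method from our Login container; a sketch:

authenticate(user, email, password) {
  const authenticationData = { Username: email, Password: password };
  const authenticationDetails = new AuthenticationDetails(authenticationData);

  return new Promise((resolve, reject) =>
    user.authenticateUser(authenticationDetails, {
      onSuccess: result => resolve(),
      onFailure: err => reject(err)
    })
  );
}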
import {
AuthenticationDetails,
CognitoUserPool
} from "amazon-cognito-identity-js";
import config from "../config";
1. In handleSubmit we make a call to signup a user. This creates a new user object.
2. We then save that user object to the state as newUser .
3. In handleConfirmationSubmit we use the confirmation code to confirm the user with
the confirm method.
4. Once the user is confirmed, Cognito knows that we have a new user that can login to our
app.
5. Use the email and password to authenticate the newly created user using the newUser
object that we had previously saved in the state.
Now if you were to switch over to your browser and try signing up for a new account it should
redirect you to the homepage after sign up successfully completes.
A quick note on the signup flow here. If the user refreshes their page at the confirm step, they
won’t be able to get back and confirm that account. It forces them to create a new account
instead. We are keeping things intentionally simple here but you can fix this by creating a
separate page that handles the confirm step based on the email address. Here
(http://docs.aws.amazon.com/cognito/latest/developerguide/using-amazon-cognito-user-
identity-pools-javascript-examples.html#using-amazon-cognito-identity-user-pools-
javascript-example-confirming-user) is some sample code that you can use to confirm an
unauthenticated user.
However, while developing you might run into cases where you need to manually confirm an
unauthenticated user. You can do that with the AWS CLI using the following command.
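Something like the following should do it; admin-confirm-sign-up is the AWS CLI operation that confirms a user on their behalf:

$ aws cognito-idp admin-confirm-sign-up \
   --region YOUR_COGNITO_REGION \
   --user-pool-id YOUR_COGNITO_USER_POOL_ID \
   --username [email protected]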
Just be sure to use your Cognito User Pool Id and the email you used to create the account.
First we are going to create the form for a note. It’ll take some content and a file as an
attachment.
this.file = null;
this.state = {
isLoading: null,
content: ""
};
}
validateForm() {
return this.state.content.length > 0;
}
handleChange = event => {
this.setState({
[event.target.id]: event.target.value
});
}
render() {
return (
<div className="NewNote">
<form onSubmit={this.handleSubmit}>
<FormGroup controlId="content">
<FormControl
onChange={this.handleChange}
value={this.state.content}
componentClass="textarea"
/>
</FormGroup>
<FormGroup controlId="file">
<ControlLabel>Attachment</ControlLabel>
<FormControl onChange={this.handleFileChange} type="file" />
</FormGroup>
<LoaderButton
block
bsStyle="primary"
bsSize="large"
disabled={!this.validateForm()}
type="submit"
isLoading={this.state.isLoading}
text="Create"
loadingText="Creating…"
/>
</form>
</div>
);
}
}
Everything is fairly standard here, except for the file input. Our form elements so far have been
controlled components (https://facebook.github.io/react/docs/forms.html), as in their value is
directly controlled by the state of the component. The file input simply calls a different
onChange handler ( handleFileChange ) that saves the file object as a class property. We
use a class property instead of saving it in the state because the file object we save does not
change or drive the rendering of our component.
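A sketch of that handler; it simply grabs the first file from the input:

handleFileChange = event => {
  this.file = event.target.files[0];
}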
Currently, our handleSubmit does not do a whole lot other than limiting the file size of our
attachment. We are going to define this limit in our config.
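For reference, a sketch of that placeholder handleSubmit (it assumes config is imported from ../config):

handleSubmit = async event => {
  event.preventDefault();

  if (this.file && this.file.size > config.MAX_ATTACHMENT_SIZE) {
    alert("Please pick a file smaller than 5MB");
    return;
  }

  this.setState({ isLoading: true });
}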
So add the following to our src/config.js below the export default { line.
MAX_ATTACHMENT_SIZE: 5000000,
.NewNote form {
padding-bottom: 15px;
}
In our React app we do step 1 by calling the authUser method when the App component
loads. So let’s do step 2 and use the userToken to generate temporary IAM credentials.
function getAwsCredentials(userToken) {
  const authenticator = `cognito-idp.${config.cognito.REGION}.amazonaws.com/${config.cognito.USER_POOL_ID}`;
  AWS.config.update({ region: config.cognito.REGION });
  AWS.config.credentials = new AWS.CognitoIdentityCredentials({
    IdentityPoolId: config.cognito.IDENTITY_POOL_ID,
    Logins: { [authenticator]: userToken }
  });
  return AWS.config.credentials.getPromise();
}
This method takes the userToken and uses our Cognito User Pool as the authenticator to
request a set of temporary credentials.
To get our AWS credentials we need to add the following to our src/config.js in
the cognito block. Make sure to replace YOUR_IDENTITY_POOL_ID with your Identity
pool ID from the Create a Cognito identity pool (/chapters/create-a-cognito-identity-pool.html)
chapter and YOUR_COGNITO_REGION with the region your Cognito User Pool is in.
REGION: "YOUR_COGNITO_REGION",
IDENTITY_POOL_ID: "YOUR_IDENTITY_POOL_ID",
await getAwsCredentials(userToken);
return true;
}
We are passing getAwsCredentials the userToken that Cognito gives us to generate the
temporary credentials. These credentials are valid till the
AWS.config.credentials.expireTime . So we simply check to ensure our credentials are
still valid before requesting a new set. This also ensures that we don’t generate the
userToken every time the authUser method is called.
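Putting those checks together, the updated authUser might look something like this sketch (the 60-second buffer before expireTime is an assumption):

export async function authUser() {
  if (
    AWS.config.credentials &&
    Date.now() < AWS.config.credentials.expireTime - 60000
  ) {
    return true;
  }

  const currentUser = getCurrentUser();

  if (currentUser === null) {
    return false;
  }

  const userToken = await getUserToken(currentUser);
  await getAwsCredentials(userToken);

  return true;
}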
To create this signature we are going to need the Crypto NPM package.
→ sigV4Client.js (https://raw.githubusercontent.com/AnomalyInnovations/serverless-
stack-demo-client/8e808f02c8ccd3037b35af4da257f0d47e1c9fe9/src/libs/sigV4Client.js)
This file can look a bit intimidating at first but it is just using the temporary credentials and the
request parameters to create the necessary signed headers. To create a new sigV4Client
we need to pass in the following:
// Pseudocode
sigV4Client.newClient({
// Your AWS temporary access key
accessKey,
// Your AWS temporary secret key
secretKey,
// Your AWS temporary session token
sessionToken,
// API Gateway region
region,
// API Gateway URL
endpoint
});
And to sign a request you need to use the signRequest method and pass in:
// Pseudocode
// client is the object returned by sigV4Client.newClient above
const signedRequest = client.signRequest({
  // The HTTP method
  method,
  // The request path
  path,
  // The request headers
  headers,
  // The request query string parameters
  queryParams,
  // The request body
  body
});
And signedRequest.headers should give you the signed headers that you need to make
the request.
Now let’s go ahead and use the sigV4Client and invoke API Gateway.
For reference, here is a sketch of that invokeApig helper in src/libs/awsLib.js (the non-200 error check is an assumption):

export async function invokeApig({
  path,
  method = "GET",
  headers = {},
  queryParams = {},
  body
}) {
  if (!await authUser()) {
    throw new Error("User is not logged in");
  }

  const signedRequest = sigV4Client
    .newClient({
      accessKey: AWS.config.credentials.accessKeyId,
      secretKey: AWS.config.credentials.secretAccessKey,
      sessionToken: AWS.config.credentials.sessionToken,
      region: config.apiGateway.REGION,
      endpoint: config.apiGateway.URL
    })
    .signRequest({ method, path, headers, queryParams, body });

  body = body ? JSON.stringify(body) : body;

  const results = await fetch(signedRequest.url, {
    method,
    headers: signedRequest.headers,
    body
  });

  if (results.status !== 200) {
    throw new Error(await results.text());
  }

  return results.json();
}
We are simply following the steps to make a signed request to API Gateway here. We first
ensure the user is authenticated and generate their temporary credentials using
authUser . Then we sign our request using the sigV4Client . Finally, we use the signed
headers to make an HTTP fetch request.
Also, add the details of our API to src/config.js above the cognito: { line.
Remember to replace YOUR_API_GATEWAY_URL and YOUR_API_GATEWAY_REGION with the
ones from the Deploy the APIs (/chapters/deploy-the-apis.html) chapter.
apiGateway: {
URL: "YOUR_API_GATEWAY_URL",
REGION: "YOUR_API_GATEWAY_REGION"
},
Now let's update the handleSubmit in src/containers/NewNote.js to call our API.

handleSubmit = async event => {
  event.preventDefault();

  if (this.file && this.file.size > config.MAX_ATTACHMENT_SIZE) {
    alert("Please pick a file smaller than 5MB");
    return;
  }

  this.setState({ isLoading: true });

  try {
    await this.createNote({
      content: this.state.content
    });
    this.props.history.push("/");
  } catch (e) {
    alert(e);
    this.setState({ isLoading: false });
  }
}
createNote(note) {
return invokeApig({
path: "/notes",
method: "POST",
body: note
});
}
1. We make our create call in createNote by making a POST request to /notes and
passing in our note object.
2. For now the note object is simply the content of the note. We are creating these notes
without an attachment for now.
And that’s it; if you switch over to your browser and try submitting your form, it should
successfully navigate over to our homepage.
Next let’s upload our file to S3 and add an attachment to our note.
We are going to use the AWS JS SDK to upload our files to S3. The S3 Bucket that we created
previously, is secured using our Cognito Identity Pool. So before we can upload a file we should
ensure that our user is authenticated and has a set of temporary IAM credentials. This is exactly
the same process as when we were making secured requests to our API in the Connect to API
Gateway with IAM auth (/chapters/connect-to-api-gateway-with-iam-auth.html) chapter.
Upload to S3
Append the following in src/libs/awsLib.js .
export async function s3Upload(file) {
  if (!await authUser()) {
    throw new Error("User is not logged in");
  }

  const s3 = new AWS.S3({ params: { Bucket: config.s3.BUCKET } });
  const filename = `${AWS.config.credentials.identityId}-${Date.now()}-${file.name}`;

  return s3
    .upload({
      Key: filename,
      Body: file,
      ContentType: file.type,
      ACL: "public-read"
    })
    .promise();
}
And add this to our src/config.js above the apiGateway block. Make sure to
replace YOUR_S3_UPLOADS_BUCKET_NAME with the your S3 Bucket name from the Create an
S3 bucket for file uploads (/chapters/create-an-s3-bucket-for-file-uploads.html) chapter.
s3: {
BUCKET: "YOUR_S3_UPLOADS_BUCKET_NAME"
},
The s3Upload method above:
1. Ensures the user is authenticated and takes a file object as a parameter.
2. Generates a unique file name prefixed with the identityId . This is necessary to secure
the files on a per-user basis.
3. Uploads the file to S3 and sets its permissions to public-read to ensure that we can
download it later.
Next, update the handleSubmit in src/containers/NewNote.js to upload the file and save its URL on the note.

handleSubmit = async event => {
  event.preventDefault();

  if (this.file && this.file.size > config.MAX_ATTACHMENT_SIZE) {
    alert("Please pick a file smaller than 5MB");
    return;
  }

  this.setState({ isLoading: true });

  try {
const uploadedFilename = this.file
? (await s3Upload(this.file)).Location
: null;
await this.createNote({
content: this.state.content,
attachment: uploadedFilename
});
this.props.history.push("/");
} catch (e) {
alert(e);
this.setState({ isLoading: false });
}
}
And make sure to include s3Upload in the header by replacing the import {
invokeApig } line with this:
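Assuming the relative path we've been using, that line becomes:

import { invokeApig, s3Upload } from "../libs/awsLib";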
The updated handleSubmit above does two things:
1. Upload the file to S3 via s3Upload , if one was selected, and grab the returned URL.
2. Use the returned URL and add that to the note object when we create the note.
Now when we switch over to our browser and submit the form with an uploaded file we should
see the note being created successfully. And the app being redirected to the homepage.
Next up we are going to make sure we clear out AWS credentials that are cached by the AWS JS
SDK before we move on.
But we need to make sure that we clear out those credentials when we logout. If we don't, the
next user that logs in on the same browser might end up with the incorrect credentials. To
handle this, add the following to the bottom of the signOutUser method in src/libs/awsLib.js.
if (AWS.config.credentials) {
AWS.config.credentials.clearCachedId();
AWS.config.credentials = new AWS.CognitoIdentityCredentials({});
}
}
Here we are clearing the AWS JS SDK cache and resetting the credentials that it saves in the
browser’s Local Storage.
Next up we are going to allow users to see a list of the notes they’ve created.
Currently, our Home container is very simple. Let's add the conditional rendering in there.
this.state = {
isLoading: true,
notes: []
};
}
renderNotesList(notes) {
return null;
}
renderLander() {
return (
<div className="lander">
<h1>Scratch</h1>
<p>A simple note taking app</p>
</div>
);
}
renderNotes() {
return (
<div className="notes">
<PageHeader>Your Notes</PageHeader>
<ListGroup>
{!this.state.isLoading &&
this.renderNotesList(this.state.notes)}
</ListGroup>
</div>
);
}
render() {
return (
<div className="Home">
{this.props.isAuthenticated ? this.renderNotes() :
this.renderLander()}
</div>
);
}
}
A few things of note here:
1. Render the lander or the list of the user's notes based on the isAuthenticated flag in the props.
2. Store our notes in the state. Currently, it's empty but we'll be calling our API for it.
3. Once we fetch our list we’ll use the renderNotesList method to render the items in the
list.
And that’s our basic setup! Head over to the browser and the homepage of our app should
render out an empty list.
Next we are going to fill it up with our API.
async componentDidMount() {
if (!this.props.isAuthenticated) {
return;
}
try {
  const results = await this.notes();
  this.setState({ notes: results });
} catch (e) {
  alert(e);
}

this.setState({ isLoading: false });
}
notes() {
return invokeApig({ path: "/notes" });
}
All this does is make a GET request to /notes on componentDidMount and put the
results in the notes object in the state.
Now let’s render the results.
renderNotesList(notes) {
return [{}].concat(notes).map(
(note, i) =>
i !== 0
? <ListGroupItem
key={note.noteId}
href={`/notes/${note.noteId}`}
onClick={this.handleNoteClick}
header={note.content.trim().split("\n")[0]}
>
{"Created: " + new Date(note.createdAt).toLocaleString()}
</ListGroupItem>
: <ListGroupItem
key="new"
href="/notes/new"
onClick={this.handleNoteClick}
>
<h4>
<b>{"\uFF0B"}</b> Create a new note
</h4>
</ListGroupItem>
);
}
1. It always renders a Create a new note button as the first item in the list (even if the list is
empty). We do this by concatenating an array with an empty object with our notes array.
2. We render the first line of each note as the ListGroupItem header by doing
note.content.trim().split('\n')[0] .
3. And onClick for each of the list items we navigate to their respective pages.
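The handleNoteClick handler referenced above can be a simple sketch like this; it prevents the browser's default link behavior and lets React Router navigate instead:

handleNoteClick = event => {
  event.preventDefault();
  this.props.history.push(event.currentTarget.getAttribute("href"));
}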
.Home .notes h4 {
font-family: "Open Sans", sans-serif;
font-weight: 600;
overflow: hidden;
line-height: 1.5;
white-space: nowrap;
text-overflow: ellipsis;
}
.Home .notes p {
color: #666;
}
Now head over to your browser and you should see your list displayed.
And if you click on the links they should take you to their respective pages.
Next up we are going to allow users to view and edit their notes.
The first thing we are going to need to do is load the note when our container loads. Just like
what we did in the Home container. So let’s get started.
Add the following line to src/Routes.js below our /notes/new route. We are
using the AppliedRoute component that we created in the Add the session to the state
(/chapters/add-the-session-to-the-state.html) chapter.
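That line would look something like the following, mirroring how we registered our other routes:

<AppliedRoute path="/notes/:id" exact component={Notes} props={childProps} />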
This is important because we are going to be pattern matching to extract our note id from the
URL.
By using the route path /notes/:id we are telling the router to send all matching routes to
our component Notes . This will also end up matching the route /notes/new with an id of
new . To ensure that doesn’t happen, we put our /notes/new route before the pattern
matching one.
Of course this component doesn’t exist yet and we are going to create it now.
this.file = null;
this.state = {
note: null,
content: ""
};
}
async componentDidMount() {
try {
const results = await this.getNote();
this.setState({
note: results,
content: results.content
});
} catch (e) {
alert(e);
}
}
getNote() {
  return invokeApig({ path: `/notes/${this.props.match.params.id}` });
}
}
render() {
return <div className="Notes" />;
}
}
All this does is load the note on componentDidMount and save it to the state. We get the id
of our note from the URL using the props automatically passed to us by React-Router in
this.props.match.params.id . The keyword id is a part of the pattern matching in our
route ( /notes/:id ).
And now if you switch over to your browser and navigate to a note that we previously created,
you’ll notice that the page renders an empty container.
validateForm() {
return this.state.content.length > 0;
}
formatFilename(str) {
  return str.length < 50
    ? str
    : str.substr(0, 20) + "..." + str.substr(str.length - 20, str.length);
}
We also add a placeholder handleDelete method; for now it just confirms with the user.

handleDelete = async event => {
  event.preventDefault();

  const confirmed = window.confirm("Are you sure you want to delete this note?");

  if (!confirmed) {
    return;
  }
}
render() {
return (
<div className="Notes">
{this.state.note &&
<form onSubmit={this.handleSubmit}>
<FormGroup controlId="content">
<FormControl
onChange={this.handleChange}
value={this.state.content}
componentClass="textarea"
/>
</FormGroup>
{this.state.note.attachment &&
<FormGroup>
<ControlLabel>Attachment</ControlLabel>
<FormControl.Static>
<a
target="_blank"
rel="noopener noreferrer"
href={this.state.note.attachment}
>
{this.formatFilename(this.state.note.attachment)}
</a>
</FormControl.Static>
</FormGroup>}
<FormGroup controlId="file">
{!this.state.note.attachment &&
<ControlLabel>Attachment</ControlLabel>}
<FormControl onChange={this.handleFileChange} type="file" />
</FormGroup>
<LoaderButton
block
bsStyle="primary"
bsSize="large"
disabled={!this.validateForm()}
type="submit"
isLoading={this.state.isLoading}
text="Save"
loadingText="Saving…"
/>
<LoaderButton
block
bsStyle="danger"
bsSize="large"
isLoading={this.state.isDeleting}
onClick={this.handleDelete}
text="Delete"
loadingText="Deleting…"
/>
</form>}
</div>
);
}
A few things to note here:
1. We render our form only once this.state.note is available.
2. Inside the form we conditionally render the part where we display the attachment by using
this.state.note.attachment .
3. We form the attachment URL using formatFilename (since S3 gives us some very long
URLs).
4. We also added a delete button to allow users to delete the note. And just like the submit
button it too needs a flag that signals that the call is in progress. We call it isDeleting .
5. We handle attachments with a file input exactly like we did in the NewNote component.
6. Our delete button also confirms with the user if they want to delete the note using the
browser’s confirm dialog.
To complete this code, let’s add isLoading and isDeleting to the state.
this.state = {
isLoading: null,
isDeleting: null,
note: null,
content: ""
};
.Notes form {
padding-bottom: 15px;
}
And that’s it. If you switch over to your browser, you should see the note loaded.
Now let's add support for saving our changes. Add the following saveNote method to src/containers/Notes.js.

saveNote(note) {
return invokeApig({
path: `/notes/${this.props.match.params.id}`,
method: "PUT",
body: note
});
}
And replace our handleSubmit method with the following.

handleSubmit = async event => {
  let uploadedFilename;

  event.preventDefault();

  if (this.file && this.file.size > config.MAX_ATTACHMENT_SIZE) {
    alert("Please pick a file smaller than 5MB");
    return;
  }

  this.setState({ isLoading: true });

  try {
    if (this.file) {
      uploadedFilename = (await s3Upload(this.file)).Location;
    }
await this.saveNote({
...this.state.note,
content: this.state.content,
attachment: uploadedFilename || this.state.note.attachment
});
this.props.history.push("/");
} catch (e) {
alert(e);
this.setState({ isLoading: false });
}
}
The code above is doing a couple of things that should be very similar to what we did in the
NewNote container.
1. If there is a file to upload we call s3Upload to upload it and save the URL.
2. We save the note by making a PUT request with the note object to /notes/note_id
where we get the note_id from this.props.match.params.id .
Let’s switch over to our browser and give it a try by saving some changes.
You might have noticed that we are not deleting the old attachment when we upload a new one.
To keep things simple, we are leaving that bit of detail up to you. It should be pretty
straightforward. Check the AWS JS SDK Docs
(http://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/S3.html#deleteObject-
property) on how to delete a file from S3.
Next let's allow users to delete their note. Add the following deleteNote method to src/containers/Notes.js.

deleteNote() {
return invokeApig({
path: `/notes/${this.props.match.params.id}`,
method: "DELETE"
});
}
And update our handleDelete method to call it.

handleDelete = async event => {
  event.preventDefault();

  const confirmed = window.confirm("Are you sure you want to delete this note?");

  if (!confirmed) {
    return;
  }

  this.setState({ isDeleting: true });

  try {
    await this.deleteNote();
    this.props.history.push("/");
  } catch (e) {
    alert(e);
    this.setState({ isDeleting: false });
  }
}
We are simply making a DELETE request to /notes/note_id where we get the id from
this.props.match.params.id . This calls our delete API and we redirect to the homepage
on success.
Now if you switch over to your browser and try deleting a note you should see it confirm your
action and then delete the note.
Again, you might have noticed that we are not deleting the attachment when we are deleting a
note. We are leaving that up to you to keep things simple. Check the AWS JS SDK Docs
(http://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/S3.html#deleteObject-
property) on how to delete a file from S3.
Now with our app nearly complete, we'll look at securing some of the pages of our app that
require a login. Currently, if you visit a note page while you are logged out, it throws an ugly
error.
Instead, we would like it to redirect us to the login page and then redirect us back after we
login. Let’s look at how to do that next.
We also have a couple of pages that need to behave in sort of the same way. We want the user
to be redirected to the homepage if they type in the login ( /login ) or signup ( /signup )
URL. Currently, the login and sign up page end up loading even though the user is already
logged in.
There are many ways to solve the above problems. The simplest would be to just check the
conditions in our containers and redirect. But since we have a few containers that need the
same logic we can create a special route (like the AppliedRoute from the Add the session to
the state (/chapters/add-the-session-to-the-state.html) chapter) for it.
We are going to create two different route components to fix the problem we have.
1. A route called the AuthenticatedRoute, that checks if the user is authenticated before
routing.
2. And a component called the UnauthenticatedRoute, that ensures the user is not
authenticated.
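Let's start with a sketch of src/components/AuthenticatedRoute.js (the redirect key in the querystring matches what we read back later):

import React from "react";
import { Route, Redirect } from "react-router-dom";

export default ({ component: C, props: cProps, ...rest }) =>
  <Route
    {...rest}
    render={props =>
      cProps.isAuthenticated
        ? <C {...props} {...cProps} />
        : <Redirect
            to={`/login?redirect=${props.location.pathname}${props.location.search}`}
          />}
  />;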
This component is similar to the AppliedRoute component that we created in the Add the
session to the state (/chapters/add-the-session-to-the-state.html) chapter. The main difference
being that we look at the props that are passed in to check if a user is authenticated. If the user
is authenticated, then we simply render the passed in component. And if the user is not
authenticated, then we use the Redirect React Router v4 component to redirect the user to
the login page. We also pass in the current path to the login page ( redirect in the
querystring). We will use this later to redirect us back after the user logs in.
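And a corresponding sketch of src/components/UnauthenticatedRoute.js:

import React from "react";
import { Route, Redirect } from "react-router-dom";

export default ({ component: C, props: cProps, ...rest }) =>
  <Route
    {...rest}
    render={props =>
      !cProps.isAuthenticated
        ? <C {...props} {...cProps} />
        : <Redirect to="/" />}
  />;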
Here we are checking to ensure that the user is not authenticated before we render the
component that is passed in. And in the case where the user is authenticated, we use the
Redirect component to simply send the user to the homepage.
Next, we are going to use that redirect in the querystring to send the user back to the note
page after they login. Let's start by adding a method to read the redirect URL from the querystring.
querystring(name, url = window.location.href) {
  name = name.replace(/[[\]]/g, "\\$&");
  const regex = new RegExp("[?&]" + name + "(=([^&#]*)|&|#|$)", "i");
  const results = regex.exec(url);
  if (!results) {
    return null;
  }
  if (!results[2]) {
    return "";
  }
  return decodeURIComponent(results[2].replace(/\+/g, " "));
}
This method takes the querystring param we want to read and returns it.
Now let’s update our Redirect component to use this when it redirects.
this.props.history.push("/");
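A minimal way to do that (a sketch; it assumes the redirect logic lives next to the querystring helper in our Login container, and falls back to the homepage when no redirect is given):

this.props.history.push(this.querystring("redirect") || "/");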
And that’s it! Our app is ready to go live. Let’s look at how we are going to deploy it using our
serverless setup.
The basic setup we are going to be using will look something like this:
AWS provides quite a few services that can help us do the above. We are going to use S3
(https://aws.amazon.com/s3/) to host our assets, CloudFront
(https://aws.amazon.com/cloudfront/) to serve it, Route 53 (https://aws.amazon.com/route53/)
to manage our domain, and Certificate Manager (https://aws.amazon.com/certificate-
manager/) to handle our SSL certificate.
So let’s get started by first configuring our S3 bucket to upload the assets of our app.
A bucket can also be configured to host the assets in it as a static website and is automatically
assigned a publicly accessible URL. So let’s get started.
Select Create Bucket and pick a name for your application and select the US East (N. Virginia)
Region. Since our application is being served out using a CDN, the region should not
matter to us.
Go through the next steps and leave the defaults by clicking Next.
Now click on your newly created bucket from the list and navigate to its permissions panel by
clicking Permissions.
Add Permissions
Buckets by default are not publicly accessible, so we need to change the S3 Bucket Permission.
Select the Bucket Policy from the permissions panel.
Add the following bucket policy into the editor. Where notes-app-client is the
name of our S3 bucket. Make sure to use the name of your bucket here.
{
"Version":"2012-10-17",
"Statement":[{
"Sid":"PublicReadForGetBucketObjects",
"Effect":"Allow",
"Principal": "*",
"Action":["s3:GetObject"],
"Resource":["arn:aws:s3:::notes-app-client/*"]
}
]
}
And hit Save.
This panel also shows us where our app will be accessible. AWS assigns us a URL for our static
website. In this case the URL assigned to me is notes-app-client.s3-website-us-east-
1.amazonaws.com .
Now that our bucket is all set up and ready, let’s go ahead and upload our assets to it.
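First, build our assets for production. With the Create React App setup we've been using, that is:

$ npm run build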
This packages all of our assets and places them in the build/ directory.
Upload to S3
Now to deploy simply run the following command; where YOUR_S3_DEPLOY_BUCKET_NAME is
the name of the S3 Bucket we created in the Create an S3 bucket (/chapters/create-an-s3-
bucket.html) chapter.
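The aws s3 sync command mirrors a local directory to a bucket:

$ aws s3 sync build/ s3://YOUR_S3_DEPLOY_BUCKET_NAME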
All this command does is sync the build/ directory with our bucket on S3. Just as a
sanity check, go into the S3 section in your AWS Console
(https://console.aws.amazon.com/console/home) and check if your bucket has the files we just
uploaded.
And our app should be live on S3! If you head over to the URL assigned to you (in my case it is
http://notes-app-client.s3-website-us-east-1.amazonaws.com (http://notes-app-client.s3-
website-us-east-1.amazonaws.com)), you should see it live.
Next we’ll configure CloudFront to serve our app out globally.
You can grab the S3 website endpoint from the Static website hosting panel for your S3 bucket.
We had configured this in the previous chapter. Copy the URL in the Endpoint field.
And paste that URL in the Origin Domain Name field. In my case it is, http://notes-app-
client.s3-website-us-east-1.amazonaws.com .
And now scroll down the form and switch Compress Objects Automatically to Yes. This will
automatically Gzip compress the files that can be compressed and speed up the delivery of our
app.
Next, scroll down a bit further to set the Default Root Object to index.html .
And finally, hit Create Distribution.
It takes AWS a little while to create a distribution. But once it is complete you can find your
CloudFront Distribution by clicking on your newly created distribution from the list and looking
up its domain name.
And if you navigate over to that in your browser, you should see your app live.
Now before we move on there is one last thing we need to do. Currently, our static website
returns our index.html as the error page. We set this up back in the chapter where we
created our S3 bucket. However, it returns a HTTP status code of 404 when it does so. We want
to return the index.html but since the routing is handled by React Router; it does not make
sense that we return the 404 HTTP status code. One of the issues with this is that certain
corporate firewalls and proxies tend to block 4xx and 5xx responses.
To set up a custom error response, head over to the Error Pages tab in our Distribution.
And type in your new domain name in the Alternate Domain Names (CNAMEs) field.
Scroll down and hit Yes, Edit to save the changes.
Next, let’s point our domain to the CloudFront Distribution.
Select your domain from the list and hit Create Record Set in the details screen.
Leave the Name field empty since we are going to point our bare domain (without the www.) to
our CloudFront Distribution.
And select Alias as Yes since we are going to simply point this to our CloudFront domain.
In the Alias Target dropdown, select your CloudFront Distribution.
Create a new Record Set with the exact settings as before, except make sure to pick AAAA -
IPv6 address as the Type.
And hit Create to add your AAAA record set.
It can take around an hour to update the DNS records but once it’s done, you should be able to
access your app through your domain.
Next up, we’ll take a quick look at ensuring that our www. domain also directs to our app.
To create a www version of our domain and have it redirect we are going to create a new S3
Bucket and a new CloudFront Distribution. This new S3 Bucket will simply respond with a
redirect to our main domain using the redirection feature that S3 Buckets have.
But unlike last time we are going to select the Redirect requests option and fill in the domain
we are going to be redirecting towards. This is the domain that we set up in our last chapter.
Also, make sure to copy the Endpoint as we’ll be needing this later.
And hit Save to make the changes. Next we’ll create a CloudFront Distribution to point to this
S3 redirect Bucket.
This time fill in www as the Name and select Alias as Yes. And pick your new CloudFront
Distribution from the Alias Target dropdown.
Add IPv6 Support
Just as before, we need to add an AAAA record to support IPv6.
Create a new Record Set with the exact same settings as before, except make sure to pick
AAAA - IPv6 address as the Type.
And that’s it! Just give it some time for the DNS to propagate and if you visit your www version
of your domain, it should redirect you to your non-www version.
Next, we’ll set up SSL and add HTTPS support for our domains.
Request a Certificate
Select Certificate Manager from the list of services in your AWS Console
(https://console.aws.amazon.com). Ensure that you are in the US East (N. Virginia) region. This
is because a certificate needs to be from this region for it to work with CloudFront
(http://docs.aws.amazon.com/acm/latest/userguide/acm-regions.html).
If this is your first certificate, you'll need to hit Get started. If not, then hit Request a certificate
from the top.
And type in the name of our domain. Hit Add another name to this certificate and add our
www version of our domain as well. Hit Review and request once you are done.
On the next screen review to make sure you filled in the right domain names and hit Confirm
and request.
And finally on the Validation screen, AWS lets you know which email addresses it's going to
send verification emails to, to confirm that you own the domain. Hit Continue to send the verification emails.
Now since we are setting up a certificate for two domains (the non-www and www versions),
we'll be receiving two emails with a link to verify that you own the domains. Make sure to hit I
Approve in both emails.
Next, we’ll associate this certificate with our CloudFront Distributions.
Then switch the Viewer Protocol Policy to Redirect HTTP to HTTPS. And scroll down to the
bottom and hit Yes, Edit.
Now let’s do the same for our other CloudFront Distribution.
But leave the Viewer Protocol Policy as HTTP and HTTPS. This is because we want our users
to go straight to the HTTPS version of our non-www domain. As opposed to redirecting to the
HTTPS version of our www domain before redirecting again.
Open up the S3 Redirect Bucket we created in the last chapter. Head over to the Properties tab
and select Static website hosting.
Change the Protocol to https and hit Save.
And that’s it. Our app should be served out on our domain through HTTPS.
Next up, let’s look at the process of deploying updates to our app.
We need to do the last step since CloudFront caches our objects in its edge locations. So to
make sure that our users see the latest version, we need to tell CloudFront to invalidate its
cache in the edge locations.
Let’s start by making a couple of changes to our app and go through the process of deploying
them.
We are going to add a Login and Signup button to our lander to give users a clear call to action.
renderLander() {
return (
<div className="lander">
<h1>Scratch</h1>
<p>A simple note taking app</p>
<div>
<Link to="/login" className="btn btn-info btn-lg">
Login
</Link>
<Link to="/signup" className="btn btn-success btn-lg">
Signup
</Link>
</div>
</div>
);
}
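Let's rebuild the app with these changes. With our Create React App setup:

$ npm run build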
Now that our app is built and ready in the build/ directory, let’s deploy to S3.
Upload to S3
Run the following from our working directory to upload our app to our main S3 Bucket. Make
sure to replace YOUR_S3_DEPLOY_BUCKET_NAME with the S3 Bucket we created in the
Create an S3 bucket (/chapters/create-an-s3-bucket.html) chapter.
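As before (the --delete flag removes files in the bucket that are no longer in build/):

$ aws s3 sync build/ s3://YOUR_S3_DEPLOY_BUCKET_NAME --delete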
To do this we’ll need the Distribution ID of both of our CloudFront Distributions. You can get it
by clicking on the distribution from the list of CloudFront Distributions.
Now we can use the AWS CLI to invalidate the cache of the two distributions. As of writing this,
the CloudFront portion of the CLI is in preview and needs to be enabled by running the
following. This only needs to be run once and not every time we deploy.
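That command looks like the following; aws configure set writes the flag to your CLI config:

$ aws configure set preview.cloudfront true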
And to invalidate the cache we run the following. Make sure to replace
YOUR_CF_DISTRIBUTION_ID and YOUR_WWW_CF_DISTRIBUTION_ID with the ones from
above.
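Something like the following, using the create-invalidation command once per distribution:

$ aws cloudfront create-invalidation --distribution-id YOUR_CF_DISTRIBUTION_ID --paths "/*"
$ aws cloudfront create-invalidation --distribution-id YOUR_WWW_CF_DISTRIBUTION_ID --paths "/*"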
This invalidates our distribution for both the www and non-www versions of our domain. If you
click on the Invalidations tab, you should see your invalidation request being processed.
It can take a few minutes to complete. But once it is done, the updated version of our app
should be live.
And that’s it! We now have a set of commands we can run to deploy our updates. Let’s quickly
put them together so we can do it with one command.
Add the following in the scripts block above eject in the package.json .
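A sketch of what those scripts might look like (the script names are assumptions; swap in your own bucket name and distribution IDs):

"predeploy": "npm run build",
"deploy": "aws s3 sync build/ s3://YOUR_S3_DEPLOY_BUCKET_NAME --delete",
"postdeploy": "aws cloudfront create-invalidation --distribution-id YOUR_CF_DISTRIBUTION_ID --paths '/*' && aws cloudfront create-invalidation --distribution-id YOUR_WWW_CF_DISTRIBUTION_ID --paths '/*'",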
Now simply run the following command from your project root when you want to deploy your
updates. It’ll build your app, upload it to S3, and invalidate the CloudFront cache.
Our app is now complete. And we have an easy way to update it!
We’d love to hear from you about your experience following this guide. Please send us any
comments or feedback you might have, via email (mailto:[email protected]). We’d love to
feature your comments here. Also, if you’d like us to cover any of the chapters or concepts in a
bit more detail, feel free to let us know (mailto:[email protected]).
The content on this site is kept up to date thanks in large part to our community and our
readers. Submit a Pull Request (https://github.com/AnomalyInnovations/serverless-stack-
com/compare) to fix any typos or errors you might find.
We rely on our GitHub repo for everything from hosting this site to code samples and
comments. Starring our repo (https://github.com/AnomalyInnovations/serverless-stack-
com) helps us get the word out.
Also, if you have any other ideas on how to contribute; feel free to let us know via email
(mailto:[email protected]).
While the hosted version of the tutorial and the code snippets are accurate, the sample project
repo that is linked at the bottom of each chapter is unfortunately not. We do however maintain
the past versions of the completed sample project repo. So you should be able to use those to
figure things out. All this info is also available on the releases page
(https://github.com/AnomalyInnovations/serverless-stack-com/releases) of our GitHub repo
(https://github.com/AnomalyInnovations/serverless-stack-com).
Versions
v1.2: Upgrade to Serverless Webpack v3
(https://59caac9bcf321c5b78f2c3e2--serverless-stack.netlify.com/)
(Current)
API (https://github.com/AnomalyInnovations/serverless-stack-demo-
api/releases/tag/v1.2)
Client (https://github.com/AnomalyInnovations/serverless-stack-demo-
client/releases/tag/v1.2) (unchanged)
API (https://github.com/AnomalyInnovations/serverless-stack-demo-
api/releases/tag/v1.1) (unchanged)
Client (https://github.com/AnomalyInnovations/serverless-stack-demo-
client/releases/tag/v1.1)
v1.0: IAM as authorizer (https://59caae01424ef20727c342ce--serverless-
stack.netlify.com/)
API (https://github.com/AnomalyInnovations/serverless-stack-demo-
api/releases/tag/v1.0)
Client (https://github.com/AnomalyInnovations/serverless-stack-demo-
client/releases/tag/v1.0)
API (https://github.com/AnomalyInnovations/serverless-stack-demo-
api/releases/tag/v0.9)
Client (https://github.com/AnomalyInnovations/serverless-stack-demo-
client/releases/tag/v0.9)
To help people stay up to date with the changes, we run the Serverless Stack newsletter
(http://eepurl.com/cEaBlf).
Types of Logs
There are 2 types of logs we usually take for granted in a monolithic environment.
Server logs
Web server logs maintain a history of requests, in the order they took place. Each log entry
contains the information about the request, including client IP address, request date/time,
request path, HTTP code, bytes served, user agent, etc.
Application logs
Application logs are a file of events that are logged by the web application. It usually
contains errors, warnings, and informational events. It could contain everything from
unexpected function failures, to key events for understanding how users behave.
In the serverless environment, we have less control over the underlying infrastructure, so
logging is the only way to acquire knowledge on how the application is performing. Amazon
CloudWatch (https://aws.amazon.com/cloudwatch/) is a monitoring service to help you collect
and track metrics for your resources. Using the analogy of server logs and application logs, you
can roughly think of the API Gateway logs as your server logs and Lambda logs as your
application logs.
First, log in to your AWS Console (https://console.aws.amazon.com) and select IAM from the
list of services.
Go back to your AWS Console (https://console.aws.amazon.com) and select API Gateway from
the list of services.
To view API Gateway logs, log in to your AWS Console (https://console.aws.amazon.com) and
select CloudWatch from the list of services.
Select Logs from the left panel.
Select the log group prefixed with API-Gateway-Execution-Logs_ followed by the API Gateway
id.
You should see 300 log streams ordered by the last event time. This is the last time a request
was recorded. Select the first stream.
This shows you the log entries grouped by request.
Note that two consecutive groups of logs are not necessarily two consecutive requests in real
time. This is because there might be other requests that are processed in between these two
that were picked up by one of the other log streams.
To view Lambda logs, select Logs again from the left panel. Then select the first log group
prefixed with /aws/lambda/ followed by the function name.
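Alternatively, you can view these logs from the command line with the Serverless Framework CLI; something like:

$ serverless logs -f <func-name>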
Where the <func-name> is the name of the Lambda function you are looking for.
Additionally, you can use the --tail flag to stream the logs automatically to your console.
This can be very helpful during development when trying to debug your functions using the
console.log call.
Hopefully, this has helped you set up CloudWatch logging for your API Gateway and Lambda
projects. And given you a quick idea of how to read your serverless logs using the AWS Console.
When a request is made to your serverless API, it starts by hitting API Gateway and makes its
way through to Lambda and invokes your function. It takes quite a few hops along the way and
each hop can be a point of failure. And since we don’t have great visibility over each of the
specific hops, pinpointing the issue can be a bit tricky. We are going to take a look at the
following issues:
This chapter assumes you have turned on CloudWatch logging for API Gateway and that you
know how to read both the API Gateway and Lambda logs. If you have not done so, start by
taking a look at the chapter on API Gateway and Lambda Logs (/chapters/api-gateway-and-
lambda-logs.html).
https://API_ID.execute-api.REGION.amazonaws.com/STAGE/PATH
In all of these cases, the error does not get logged to CloudWatch since the request does not hit
your API Gateway project.
This is a tricky issue to debug because the request still has not reached API Gateway, and hence
the error is not logged in the API Gateway CloudWatch logs. But we can perform a check to
ensure that our Cognito Identity Pool users have the required permissions, using the IAM
Policy Simulator (https://policysim.aws.amazon.com).
Before we can use the simulator we first need to find out the name of the IAM role that we are
using to connect to API Gateway. We had created this role back in the Create a Cognito identity
pool (/chapters/create-a-cognito-identity-pool.html) chapter.
Select API Gateway as the service and select the Invoke action.
Expand the service and enter the API Gateway endpoint ARN, then select Run Simulation. The
format here is the same one we used back in the Create a Cognito identity pool
(/chapters/create-a-cognito-identity-pool.html) chapter; arn:aws:execute-
api:YOUR_API_GATEWAY_REGION:*:YOUR_API_GATEWAY_ID/* . In our case this looks like
arn:aws:execute-api:us-east-1:*:ly55wbovq4/* .
If your IAM role is configured properly you should see allowed under Permission.
But if something is off, you’ll see denied.
To fix this and edit the role we need to go back to the AWS Console
(https://console.aws.amazon.com) and select IAM from the list of services.
Select Roles on the left menu.
And select the IAM role that our Identity Pool is using. In our case it’s called
Cognito_notesidentitypoolAuth_Role .
...
{
  "Effect": "Allow",
  "Action": [
    "execute-api:Invoke"
  ],
  "Resource": [
    "arn:aws:execute-api:YOUR_API_GATEWAY_REGION:*:YOUR_API_GATEWAY_ID/*"
  ]
}
...
Now if you test your policy, it should show that you are allowed to invoke your API Gateway
endpoint.
Lambda Function Error
Now if you are able to invoke your Lambda function but it fails to execute properly due to
uncaught exceptions, it’ll error out. These are pretty straightforward to debug. When this
happens, AWS Lambda will attempt to convert the error object to a string, and then send it to
CloudWatch along with the stacktrace. This can be observed in both Lambda and API Gateway
CloudWatch log groups.
To get around this issue, you can set the callbackWaitsForEmptyEventLoop property to false.
This requests AWS Lambda to freeze the process as soon as the callback is called, even if there
are events in the event loop.
export const handler = (event, context, callback) => {
  context.callbackWaitsForEmptyEventLoop = false;
  ...
};
This effectively allows a Lambda function to return its result to the caller without requiring that
the database connection be closed. This allows the Lambda function to reuse the same
connection across calls, and it reduces the execution time as well.
These are just a few of the common issues we see folks running into while working with
serverless APIs. Feel free to let us know via the comments if there are any other issues you’d
like us to cover.
service: service-name
provider:
name: aws
stage: dev
functions:
hello:
handler: handler.hello
environment:
SYSTEM_URL: http://example.com/api/v1
Here SYSTEM_URL is the name of the environment variable we are defining and
http://example.com/api/v1 is its value. We can access this in our hello Lambda
function using process.env.SYSTEM_URL , like so:
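A sketch (the response shape assumes a Lambda proxy integration):

export function hello(event, context, callback) {
  // SYSTEM_URL is available because we defined it in serverless.yml
  callback(null, {
    statusCode: 200,
    body: `Talking to ${process.env.SYSTEM_URL}`
  });
}

We can also define environment variables that are available to all the Lambda functions in our service by adding them to the provider section.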
service: service-name
provider:
name: aws
stage: dev
environment:
SYSTEM_ID: jdoe
functions:
hello:
handler: handler.hello
environment:
SYSTEM_URL: http://example.com/api/v1
Just as before we can access the environment variable SYSTEM_ID in our hello Lambda
function using process.env.SYSTEM_ID . The difference being that it is available to all the
Lambda functions defined in our serverless.yml .
In the case where both the provider and functions sections have an environment variable
with the same name, the function-specific environment variable takes precedence. As in, we can
override the environment variables described in the provider section with the ones defined
in the functions section.
Let’s take a quick look at how these work using an example. Say you had the following
serverless.yml .
service: service-name
provider:
name: aws
stage: dev
functions:
helloA:
handler: handler.helloA
environment:
SYSTEM_URL: http://example.com/api/v1/pathA
helloB:
handler: handler.helloB
environment:
SYSTEM_URL: http://example.com/api/v1/pathB
In the case above we have the environment variable SYSTEM_URL defined in both the
helloA and helloB Lambda functions. But the only difference between them is that the url
ends with pathA or pathB . We can merge these two using the idea of variables.
A variable allows you to replace values in your serverless.yml dynamically. It uses the
${variableName} syntax, where the value of variableName will be inserted.
Let’s see how this works in practice. We can rewrite our example and simplify it by doing the
following:
service: service-name
custom:
systemUrl: http://example.com/api/v1/
provider:
name: aws
stage: dev
functions:
helloA:
handler: handler.helloA
environment:
SYSTEM_URL: ${self:custom.systemUrl}pathA
helloB:
handler: handler.helloB
environment:
SYSTEM_URL: ${self:custom.systemUrl}pathB
custom:
systemUrl: http://example.com/api/v1/
This defines a variable called systemUrl under the section custom . We can then reference
the variable using the syntax ${self:custom.systemUrl} .
Variables can be referenced from a lot of different sources including CLI options, external
YAML files, etc. You can read more about using variables in your serverless.yml here
(https://serverless.com/framework/docs/providers/aws/guide/variables/).
In this chapter we will take a look at how to configure stages in Serverless. Let’s first start by
looking at how stages can be implemented.
You can create multiple stages within a single API Gateway project. Stages within the same
project share the same endpoint host, but have a different path. For example, say you have
a stage called prod with the endpoint:
https://abc12345.execute-api.us-east-1.amazonaws.com/prod
If you were to add a stage called dev to the same API Gateway API, the new stage will
have the endpoint:
https://abc12345.execute-api.us-east-1.amazonaws.com/dev
The downside is that both stages are part of the same project. You don’t have the same
level of flexibility to fine tune the IAM policies for stages of the same API, when compared
to tuning different APIs. This leads to the next setup, each stage being its own API.
You create an API Gateway project for each stage. Let’s take the same example, your
prod stage has the endpoint:
https://abc12345.execute-api.us-east-1.amazonaws.com/prod
To create the dev stage, you create a new API Gateway project and add the dev stage
to the new project. The new endpoint will look something like:
https://xyz67890.execute-api.us-east-1.amazonaws.com/dev
Note that the dev stage carries a different endpoint host since it belongs to a different
project. This is the approach Serverless Framework takes when configuring stages for your
Serverless project. We will look at this in detail below.
Just like having each stage as a separate API gives us more flexibility to fine tune the
IAM policy, we can take it a step further and create each API project in a different AWS
account. Most companies don't keep their production infrastructure in the same account as
their development infrastructure. This helps reduce any cases where developers
accidentally edit/delete production resources. We go into more detail on how to deploy to
multiple AWS accounts using different AWS profiles in the Configure Multiple AWS
Profiles (/chapters/configure-multiple-aws-profiles.html) chapter.
Deploying to a Stage
Let’s look at how the Serverless Framework helps us work with stages. As mentioned above, a
new stage is a new API Gateway project. To deploy to a specific stage, you can either specify the
stage in the serverless.yml .
service: service-name
provider:
name: aws
stage: dev
Or you can specify the stage by passing the --stage option to the serverless deploy
command.
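For example, to deploy to a stage called prod:

$ serverless deploy --stage prod

You can also use the stage your project is being deployed to and make your environment variables stage-specific. Take the following serverless.yml.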
service: service-name
custom:
myStage: ${opt:stage, self:provider.stage}
myEnvironment:
MESSAGE:
prod: "This is production environment"
dev: "This is development environment"
provider:
name: aws
stage: dev
environment:
  MESSAGE: ${self:custom.myEnvironment.MESSAGE.${self:custom.myStage}}
There are a couple of things happening here. We first defined the custom.myStage variable
as ${opt:stage, self:provider.stage} . This is telling Serverless Framework to use the
--stage CLI option if it exists. And if it does not, then use the default stage specified by
provider.stage . We also define the custom.myEnvironment section. This contains the
value for MESSAGE defined for each stage. Finally, we set the environment variable MESSAGE
as ${self:custom.myEnvironment.MESSAGE.${self:custom.myStage}} . This sets the
variable to pick the value of custom.myEnvironment depending on the current stage
defined in custom.myStage .
You can easily extend this format to create separate sets of environment variables for the
stages you are deploying to.
And we can access MESSAGE in our Lambda functions via the process.env object, like so (a minimal sketch of a handler; the function name is just an example):
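export const hello = async (event, context, callback) => {
  // MESSAGE comes from the environment section in serverless.yml above
  callback(null, {
    statusCode: 200,
    body: JSON.stringify({ message: process.env.MESSAGE })
  });
};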
Hopefully, this chapter gives you a quick idea of how to set up stages in your Serverless project.
Configure Multiple AWS Profiles
Your AWS credentials are stored in ~/.aws/credentials and are used by the Serverless
Framework when we run serverless deploy . Behind the scenes, Serverless uses these
credentials and the AWS SDK to create the necessary resources on your behalf in the AWS
account specified in the credentials.
There are cases where you might have multiple credentials configured in your AWS CLI. This
usually happens if you are working on multiple projects or if you want to separate the different
stages of the same project.
In this chapter let’s take a look at how you can work with multiple AWS credentials.
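To create a new profile, run the aws configure command with the --profile option (the profile name newAccount here is just an example):

$ aws configure --profile newAccount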
Where newAccount is the name of the new profile you are creating. You can leave the
Default region name and Default output format the way they are.
When you run serverless invoke local , your Lambda function is run locally and has not been deployed yet. So any calls
made in your Lambda function to any other AWS resources on your account will use the default
AWS profile that you have. You can check your default AWS profile in ~/.aws/credentials
under the [default] tag.
To switch the default AWS profile to a new profile for the serverless invoke local
command, you can run the following:
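$ AWS_PROFILE=newAccount serverless invoke local --function hello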
Here newAccount is the name of the profile you want to switch to and hello is the name of
the function that is being invoked locally. By adding AWS_PROFILE=newAccount at the
beginning of our serverless invoke local command we are setting the variable that the
AWS SDK will use to figure out what your default AWS profile is.
If you don’t want to add this to the beginning of each of your commands, you can set it once
with the following command:
$ export AWS_PROFILE=newAccount
Where newAccount is the profile you want to switch to. Now for the rest of your shell
session, newAccount will be your default profile.
You can read more about this in the AWS Docs here
(http://docs.aws.amazon.com/cli/latest/userguide/cli-multiple-profiles.html).
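Similarly, you can tell Serverless Framework which profile to use when deploying by passing the --aws-profile option to the serverless deploy command:

$ serverless deploy --aws-profile newAccount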
Again, newAccount is the AWS profile Serverless Framework will be using to deploy.
If you don’t want to set the profile every time you run serverless deploy , you can add it to
your serverless.yml .
service: service-name

provider:
  name: aws
  stage: dev
  profile: newAccount
Note the profile: newAccount line here. This is telling Serverless to use the
newAccount profile while running serverless deploy .
Let’s look at a quick example of how to work with multiple profiles per stage. So following the
examples from before, if you wanted to deploy to your production environment, you would:
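$ serverless deploy --stage prod --aws-profile prodAccount

And to deploy to staging:

$ serverless deploy --stage dev --aws-profile devAccount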
Here, prodAccount and devAccount are the AWS profiles for the production and staging
environment respectively.
To simplify this process, you can add the profiles to your serverless.yml so you don’t have
to specify them in your serverless deploy commands.
service: service-name

custom:
  myStage: ${opt:stage, self:provider.stage}
  myProfile:
    prod: prodAccount
    dev: devAccount

provider:
  name: aws
  stage: dev
  profile: ${self:custom.myProfile.${self:custom.myStage}}
We used the concept of variables in Serverless Framework in this example. You can read more
about this in the chapter on Serverless Environment Variables (/chapters/serverless-
environment-variables.html).
Now, when you deploy to production, Serverless Framework is going to use the prodAccount
profile, and the resources will be provisioned in the AWS account tied to that profile:
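$ serverless deploy --stage prod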
And when you deploy to staging, the exact same set of AWS resources will be provisioned in
the AWS account tied to the devAccount profile:
$ serverless deploy --stage dev
Notice that we did not have to set the --aws-profile option. And that’s it, this should give
you a good understanding of how to work with multiple AWS profiles and credentials.
Customize the Serverless IAM Policy
In this chapter we will take a look at how to customize the IAM Policy that Serverless
Framework is going to use.
Granting the AdministratorAccess policy ensures that your project will always have the necessary
permissions. But if you want an IAM policy that grants only a minimal set of permissions,
you need to customize it. A simple starting point is the following policy, which grants access to
the set of services that a Serverless Framework project typically needs:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "cloudformation:*",
        "s3:*",
        "logs:*",
        "iam:*",
        "apigateway:*",
        "lambda:*",
        "ec2:DescribeSecurityGroups",
        "ec2:DescribeSubnets",
        "ec2:DescribeVpcs",
        "events:*"
      ],
      "Resource": [
        "*"
      ]
    }
  ]
}
We can attach this policy to the IAM user we are creating by continuing from the Attach
existing policies directly step in the Create an IAM User (/chapters/create-an-iam-
user.html) chapter.
Finally, hit Create Policy. You can now choose this policy while creating your IAM user instead of
the AdministratorAccess one that we had used before.
This policy grants your Serverless Framework project access to all the resources listed above.
But we can narrow this down further by restricting them to specific Actions for the specific
Resources in each AWS service.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "cloudformation:Describe*",
        "cloudformation:List*",
        "cloudformation:Get*",
        "cloudformation:PreviewStackUpdate",
        "cloudformation:CreateStack",
        "cloudformation:UpdateStack",
        "cloudformation:DeleteStack"
      ],
      "Resource": "arn:aws:cloudformation:<region>:<account_no>:stack/<service_name>*/*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "cloudformation:ValidateTemplate"
      ],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:CreateBucket",
        "s3:DeleteBucket",
        "s3:Get*",
        "s3:List*"
      ],
      "Resource": [
        "arn:aws:s3:::*"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:*"
      ],
      "Resource": [
        "arn:aws:s3:::*/*"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "logs:DescribeLogGroups"
      ],
      "Resource": "arn:aws:logs:<region>:<account_no>:log-group::log-stream:*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:DeleteLogGroup",
        "logs:DeleteLogStream",
        "logs:DescribeLogStreams",
        "logs:FilterLogEvents"
      ],
      "Resource": "arn:aws:logs:<region>:<account_no>:log-group:/aws/lambda/<service_name>*:log-stream:*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "iam:GetRole",
        "iam:PassRole",
        "iam:CreateRole",
        "iam:DeleteRole",
        "iam:DetachRolePolicy",
        "iam:PutRolePolicy",
        "iam:AttachRolePolicy",
        "iam:DeleteRolePolicy"
      ],
      "Resource": [
        "arn:aws:iam::<account_no>:role/<service_name>*-lambdaRole"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "apigateway:GET",
        "apigateway:POST",
        "apigateway:PUT",
        "apigateway:DELETE"
      ],
      "Resource": [
        "arn:aws:apigateway:<region>::/restapis"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "apigateway:GET",
        "apigateway:POST",
        "apigateway:PUT",
        "apigateway:DELETE"
      ],
      "Resource": [
        "arn:aws:apigateway:<region>::/restapis/*"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "lambda:GetFunction",
        "lambda:CreateFunction",
        "lambda:DeleteFunction",
        "lambda:UpdateFunctionConfiguration",
        "lambda:UpdateFunctionCode",
        "lambda:ListVersionsByFunction",
        "lambda:PublishVersion",
        "lambda:CreateAlias",
        "lambda:DeleteAlias",
        "lambda:UpdateAlias",
        "lambda:GetFunctionConfiguration",
        "lambda:AddPermission",
        "lambda:RemovePermission",
        "lambda:InvokeFunction"
      ],
      "Resource": [
        "arn:aws:lambda:*:<account_no>:function:<service_name>*"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "ec2:DescribeSecurityGroups",
        "ec2:DescribeSubnets",
        "ec2:DescribeVpcs"
      ],
      "Resource": [
        "*"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "events:Put*",
        "events:Remove*",
        "events:Delete*",
        "events:Describe*"
      ],
      "Resource": "arn:aws:events::<account_no>:rule/<service_name>*"
    }
  ]
}
The <account_no> is your AWS Account ID and you can follow these instructions
(http://docs.aws.amazon.com/IAM/latest/UserGuide/console_account-alias.html) to look it up.
Also, recall that the <region> and <service_name> are defined in your
serverless.yml like so.
service: my-service

provider:
  name: aws
  region: us-east-1
The above IAM policy template restricts access to the AWS services based on the name of your
Serverless project and the region it is deployed in.
It provides sufficient permissions for a minimal Serverless project. However, if you provision
any additional resources in your serverless.yml, install Serverless plugins, or invoke AWS
APIs in your application code, you will need to update the IAM policy to accommodate
those changes. If you are wondering where this policy comes from, here is an in-
depth discussion on the minimal Serverless IAM Deployment Policy
(https://github.com/serverless/serverless/issues/1439) required for a Serverless project.
Code Splitting
While working on React.js single page apps, there is a tendency for apps to grow quite large. A
section of the app (or route) might import a large number of components that are not necessary
when it first loads. This hurts the initial load time of our app.
You might have noticed that Create React App will generate one large .js file while we are
building our app. This contains all the JavaScript our app needs. But if a user is simply loading
the login page to sign in, it doesn’t make sense to load the rest of the app with it. This isn’t
a concern early on when our app is quite small, but it becomes an issue down the road. To
address this, Create React App has a very simple built-in way to split up our code. This feature,
unsurprisingly, is called Code Splitting.
Create React App (from 1.0 onwards) allows us to dynamically import parts of our app using the
import() proposal. You can read more about it here
(https://facebook.github.io/react/blog/2017/05/18/whats-new-in-create-react-
app.html#code-splitting-with-dynamic-import).
While the dynamic import() can be used for any component in our React app, it works
really well with React Router. Since React Router figures out which component to load
based on the path, it makes sense to dynamically import those components only
when we navigate to them.
We start by importing the components that will respond to our routes and then use them to
define our routes. The Switch component renders the route that matches the path, as in the
sketch below.
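For example, a minimal src/Routes.js with static imports might look something like this (the Home and NotFound containers and the react-router-dom imports are illustrative; your app’s containers may differ):

import React from "react";
import { Route, Switch } from "react-router-dom";
import Home from "./containers/Home";
import NotFound from "./containers/NotFound";

export default () => (
  <Switch>
    <Route path="/" exact component={Home} />
    {/* Catch-all route for unmatched paths */}
    <Route component={NotFound} />
  </Switch>
);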
However, we import all of the components in the routes statically at the top. This means that all
these components are loaded regardless of which route is matched. To implement Code
Splitting here, we are going to want to only load the component that responds to the matched
route.
We do this with an asyncComponent helper that takes a function which does the dynamic import, and renders the imported component once it is loaded (we’ll save it as, say, src/components/AsyncComponent.js):

import React, { Component } from "react";

export default function asyncComponent(importComponent) {
  class AsyncComponent extends Component {
    constructor(props) {
      super(props);
      this.state = {
        component: null
      };
    }
    async componentDidMount() {
      // Do the dynamic import and store the loaded component in the state
      const { default: component } = await importComponent();
      this.setState({
        component: component
      });
    }
    render() {
      const C = this.state.component;
      // Render nothing until the import completes; a loading spinner could go here
      return C ? <C {...this.props} /> : null;
    }
  }
  return AsyncComponent;
}
We are going to use the asyncComponent to dynamically import the component we want.
const AsyncHome = asyncComponent(() => import("./containers/Home"));
It’s important to note that we are not doing an import here. We are only passing in a function to
asyncComponent that will do the dynamic import() when the AsyncHome component is
created.
Also, it might seem weird that we are passing a function here. Why not just pass in a string (say
./containers/Home ) and then do the dynamic import() inside the AsyncComponent ?
This is because we want to explicitly state the component we are dynamically importing.
Webpack splits our app based on this. It looks at these imports and generates the required
parts (or chunks). This was pointed out by @wSokra
(https://twitter.com/wSokra/status/866703557323632640) and @dan_abramov
(https://twitter.com/dan_abramov/status/866646657437491201).
We are then going to use the AsyncHome component in our routes. React Router will create
the AsyncHome component when the route is matched and that will in turn dynamically
import the Home component and continue just like before.
Now let’s go back to our Notes project and apply these changes.
It is pretty cool that with just a couple of changes, our app is all set up for code splitting. And
without adding a whole lot more complexity either! Previously, our src/Routes.js did static
imports for all the containers at the top, just like the sketch earlier in this chapter. Now,
instead of doing those static imports, we create functions that do the dynamic imports for us
when necessary, as sketched below.
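A sketch of the updated src/Routes.js, using the same illustrative containers and AsyncComponent path as before:

import React from "react";
import { Route, Switch } from "react-router-dom";
import asyncComponent from "./components/AsyncComponent";

// Each container is now loaded only when its route is first matched
const AsyncHome = asyncComponent(() => import("./containers/Home"));
const AsyncNotFound = asyncComponent(() => import("./containers/NotFound"));

export default () => (
  <Switch>
    <Route path="/" exact component={AsyncHome} />
    <Route component={AsyncNotFound} />
  </Switch>
);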
Now if you build your app using npm run build , you’ll see the code splitting in action: the
build output will list a number of .chunk.js files alongside the main bundle. Each of those
.chunk.js files corresponds to one of the dynamic import() calls that we have. Of
course, our app is quite small and the various parts that are split up are not significant at all.
However, if the page that we use to edit our note included a rich text editor, you can imagine
how that would grow in size. And it would unfortunately affect the initial load time of our app.
Now if we deploy our app using npm run deploy , you can see the browser load the different
chunks on-demand as we browse around in the demo (https://demo.serverless-stack.com).
That’s it! With just a few simple changes our app is completely set up to use the code splitting
feature that Create React App has.
Next Steps
Now this seems really easy to implement, but you might be wondering what happens if the
request to import the new component takes too long or fails. Or maybe you want to preload
certain components. For example, a user is on your login page about to log in, and you want to
preload the homepage.
It was mentioned above that you can add a loading spinner while the import is in progress. But
we can take it a step further and address some of these edge cases. There is an excellent higher
order component that does a lot of this well; it’s called react-loadable
(https://github.com/thejameskyle/react-loadable).
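For example, with react-loadable our AsyncHome declaration from above would look something like this (a sketch based on react-loadable’s Loadable API):

import Loadable from "react-loadable";

const AsyncHome = Loadable({
  loader: () => import("./containers/Home"),
  loading: MyLoadingComponent
});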
And AsyncHome is used exactly as before. Here the MyLoadingComponent would look
something like this.
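const MyLoadingComponent = ({ isLoading, error }) => {
  // Handle the loading state
  if (isLoading) {
    return <div>Loading...</div>;
  }
  // Handle the error state
  else if (error) {
    return <div>Sorry, there was a problem loading the page</div>;
  }
  // The component loaded; react-loadable renders it instead
  else {
    return null;
  }
};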
It’s a simple component that handles all the different edge cases gracefully.
To add preloading and to further customize this; make sure to check out the other options and
features that react-loadable (https://github.com/thejameskyle/react-loadable) has. And have
fun code splitting!
Environments in Create React App
Aside from isolating the resources used, having a separate environment that mimics your
production version can really help with testing your changes before they go live. You can take
this idea of environments further by having a staging environment that can even have
snapshots of the live database, to give you as close to a production setup as possible. This type
of setup can sometimes help track down bugs and issues that you might run into only in your
live environment and not locally.
In this chapter we will look at some simple ways to configure multiple environments in our
React app. There are many different ways to do this but here is a simple one based on what we
have built so far in this guide.
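Create React App supports custom environment variables out of the box. To see this in action, you can start your app with one set on the command line (REACT_APP_TEST_VAR here is just an example name):

$ REACT_APP_TEST_VAR=123 npm start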
Here REACT_APP_TEST_VAR is the custom environment variable and we are setting it to the
value 123 . In our app we can access this variable as process.env.REACT_APP_TEST_VAR .
So the following line in our app:
console.log(process.env.REACT_APP_TEST_VAR);
Will print out 123 in our console.
Note that these variables are embedded at build time. Also, only the variables that start
with REACT_APP_ are embedded in our app; all other environment variables are ignored.
Configuring Environments
We can use this idea of custom environment variables to configure our React app for specific
environments. Say we used a custom environment variable called REACT_APP_STAGE to
denote the environment our app is in. And we wanted to configure two environments for our
app:
One that we will use for our local development and also to test before pushing it to live.
Let’s call this one dev .
And our live environment that we will only push to, once we are comfortable with our
changes. Let’s call it production .
The first thing we can do is to configure our build system with the REACT_APP_STAGE
environment variable. Currently the scripts portion of our package.json looks
something like this:
"scripts": {
"start": "react-scripts start",
"build": "react-scripts build",
"test": "react-scripts test --env=jsdom",
"predeploy": "npm run build",
"deploy": "aws s3 sync build/ s3://YOUR_S3_DEPLOY_BUCKET_NAME",
"postdeploy": "aws cloudfront create-invalidation --distribution-id
YOUR_CF_DISTRIBUTION_ID --paths '/*' && aws cloudfront create-
invalidation --distribution-id YOUR_WWW_CF_DISTRIBUTION_ID --paths
'/*'",
"eject": "react-scripts eject"
}
Here we only have one environment, and we use it both for our local development and for live.
The npm start command runs our local server and the npm run deploy command
deploys our app to live. To mark local development as the dev environment, we prefix the
start command with the REACT_APP_STAGE variable (the remaining scripts stay the same):

"scripts": {
  "start": "REACT_APP_STAGE=dev react-scripts start",
  "build": "react-scripts build",
  "test": "react-scripts test --env=jsdom",
  ...
}
Note that you don’t have to replicate the S3 and CloudFront Distributions for the dev version.
But it does help if you want to mimic the live version as much as possible. Next, let’s use this
variable in our app’s config. Currently, our src/config.js looks something like this:
export default {
  MAX_ATTACHMENT_SIZE: 5000000,
  s3: {
    BUCKET: "YOUR_S3_UPLOADS_BUCKET_NAME"
  },
  apiGateway: {
    REGION: "YOUR_API_GATEWAY_REGION",
    URL: "YOUR_API_GATEWAY_URL"
  },
  cognito: {
    REGION: "YOUR_COGNITO_REGION",
    USER_POOL_ID: "YOUR_COGNITO_USER_POOL_ID",
    APP_CLIENT_ID: "YOUR_COGNITO_APP_CLIENT_ID",
    IDENTITY_POOL_ID: "YOUR_IDENTITY_POOL_ID"
  }
};
To use the REACT_APP_STAGE variable, we are just going to set the config conditionally. First, we define a set of resources for each environment.
const dev = {
  s3: {
    BUCKET: "YOUR_DEV_S3_UPLOADS_BUCKET_NAME"
  },
  apiGateway: {
    REGION: "YOUR_DEV_API_GATEWAY_REGION",
    URL: "YOUR_DEV_API_GATEWAY_URL"
  },
  cognito: {
    REGION: "YOUR_DEV_COGNITO_REGION",
    USER_POOL_ID: "YOUR_DEV_COGNITO_USER_POOL_ID",
    APP_CLIENT_ID: "YOUR_DEV_COGNITO_APP_CLIENT_ID",
    IDENTITY_POOL_ID: "YOUR_DEV_IDENTITY_POOL_ID"
  }
};

const prod = {
  s3: {
    BUCKET: "YOUR_PROD_S3_UPLOADS_BUCKET_NAME"
  },
  apiGateway: {
    REGION: "YOUR_PROD_API_GATEWAY_REGION",
    URL: "YOUR_PROD_API_GATEWAY_URL"
  },
  cognito: {
    REGION: "YOUR_PROD_COGNITO_REGION",
    USER_POOL_ID: "YOUR_PROD_COGNITO_USER_POOL_ID",
    APP_CLIENT_ID: "YOUR_PROD_COGNITO_APP_CLIENT_ID",
    IDENTITY_POOL_ID: "YOUR_PROD_IDENTITY_POOL_ID"
  }
};
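Then, at the bottom of our src/config.js, we export the config for the current stage. A minimal sketch of the selection logic, defaulting to dev unless REACT_APP_STAGE is set to production:

const config = process.env.REACT_APP_STAGE === "production"
  ? prod
  : dev;

export default {
  // Values shared across both environments
  MAX_ATTACHMENT_SIZE: 5000000,
  ...config
};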
This is pretty straightforward. We simply have a set of configs for dev and for production. The
configs point to a separate set of resources for our dev and production environments. And
using process.env.REACT_APP_STAGE we decide which one to use.
Again, it might not be necessary to replicate the resources for each of the environments. But it
is pretty important to separate your live resources from your dev ones. You do not want to be
testing your changes directly on your live database.
So to recap:
1. We have two environments: dev for local development and testing, and production for our live version.
2. The REACT_APP_STAGE custom environment variable, set in our npm scripts, tells our app which environment it is running in.
3. Based on process.env.REACT_APP_STAGE, our config picks which set of resources to use.
This entire setup is fairly straightforward and can be extended to multiple environments. You
can read more on custom environment variables in Create React App here
(https://github.com/facebookincubator/create-react-app/blob/master/packages/react-
scripts/template/README.md#adding-custom-environment-variables).
Demo
A demo version of this service is hosted on AWS - https://cvps1pt354.execute-api.us-
east-1.amazonaws.com/dev/hello (https://cvps1pt354.execute-api.us-east-
1.amazonaws.com/dev/hello).
The starter includes a simple hello Lambda function; a minimal sketch of its handler (the actual starter returns a more detailed message):

export const hello = async (event, context, callback) => {
  // Respond with a 200 and a simple success message
  const response = { statusCode: 200, body: JSON.stringify({ message: "Hello" }) };
  callback(null, response);
};
Requirements
Configure your AWS CLI (/chapters/configure-the-aws-cli.html)
Install the Serverless Framework:
$ npm install serverless -g
Installation
To create a new Serverless project with ES7 support:
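Assuming the starter repo linked at the end of this chapter, you can use the Serverless Framework’s install command to copy the template down:

$ serverless install --url https://github.com/AnomalyInnovations/serverless-nodejs-starter --name my-project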
$ cd my-project
$ npm install
Usage
To run a function locally:
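$ serverless invoke local --function hello

Here hello is the demo function from above. To run your tests: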
$ npm test
We use Jest to run our tests. You can read more about setting up your tests here
(https://facebook.github.io/jest/docs/en/getting-started.html#content).
To deploy your project:
$ serverless deploy
So give it a try and send us an email (mailto:[email protected]) if you have any questions or
open a new issue (https://github.com/AnomalyInnovations/serverless-nodejs-
starter/issues/new) if you’ve found a bug.