How we secure APIs at Nylas using JSON web tokens

Introduction

Security is a core focus at Nylas; we design our applications with it in mind from the start. This post goes over how we leverage JWT (JSON Web Tokens) to securely communicate between our API services.

Why use JWT?

JWT (JSON Web Tokens) is gaining in popularity. Like any other token, a JWT is just a string of data; what sets it apart is that it carries a set of claims about the sender and the request (more on claims shortly).

At Nylas we use JWT for our internal applications because we need to communicate between API services running on different clusters and cloud providers, such as AWS (Amazon Web Services) and GCP (Google Cloud Platform).

To enable that communication, our API services have to be publicly accessible, so we need a solution that can be implemented quickly and that scales toward longer-term solutions such as a service mesh.

JWT is a fast, reliable, and scalable way to secure our applications while they are exposed on the public network.

How to implement JWT verification

This is where we leverage the claims support built into JWT libraries. Claims are a set of data containing information about the sender and the nature of the request. The claims data is encoded (not encrypted) into the JWT string itself and is covered by the token's signature. We use four registered claims plus two custom ones (a sketch follows the list):

  • iss (issuer): issuer of the JWT, i.e. <service A>
  • aud (audience): recipient for which the JWT is intended, i.e. <service B>
  • exp (expiration time): time after which the JWT expires
  • iat (issued at time): time at which the JWT was issued; can be used to determine age of the JWT
  • erb (encoded request body): an encoding of the request body, verified by service B when it validates the token (a custom claim)
  • rv (RSA key version): the UUID associated with the RSA key pair (a custom claim)
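
To make this concrete, here is a rough sketch of how such a claims set could be assembled in Go, assuming the github.com/golang-jwt/jwt/v4 package (all values are placeholders, and the five-minute expiry is an illustrative choice):

import (
	"encoding/base64"
	"time"

	"github.com/golang-jwt/jwt/v4"
)

// buildClaims assembles the claims service A will sign. "erb" carries a
// base64 encoding of the request body and "rv" the UUID of the RSA key
// pair in use.
func buildClaims(requestBody []byte, keyVersion string) jwt.MapClaims {
	now := time.Now()
	return jwt.MapClaims{
		"iss": "service-a",
		"aud": "service-b",
		"iat": now.Unix(),
		"exp": now.Add(5 * time.Minute).Unix(),
		"erb": base64.StdEncoding.EncodeToString(requestBody),
		"rv":  keyVersion,
	}
}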

Authentication flow with JWT

This is the authentication flow for API communication with JWT:

  1. Service A signs the claims with a private key read from an environment variable, using a specified algorithm (RS256 in our case), and generates a signed token.
  2. We add the signed token to the Authorization header as a bearer token, Bearer <token>. Now the signed request is ready to be sent to service B.
  3. Service B is responsible for verifying the token. It confirms the signature is authentic by checking it against a public key read from its environment variables.

The JWT signature also covers a base64-encoded copy of the request body (the erb claim), so even if someone managed to intercept a token, they would not be able to reuse it with a modified payload.
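
Here is a minimal sketch of that body check, assuming the base64 scheme just described (the helper names are illustrative):

import (
	"encoding/base64"
	"errors"
)

// checkBody runs on service B after the signature has been verified: it
// re-encodes the body it actually received and compares it to the "erb"
// claim, so an intercepted token cannot be replayed with a new payload.
func checkBody(erbClaim string, receivedBody []byte) error {
	if erbClaim != base64.StdEncoding.EncodeToString(receivedBody) {
		return errors.New("request body does not match signed erb claim")
	}
	return nil
}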

Building a shared library to verify JWT tokens

We created a shared library to make signing and verifying JWTs easier. Let's take a look at the token format: a JWT is a three-part string composed of a header, a payload, and a signature, in the form header.payload.signature:

  • The header is:
{
  "alg": "RS256",
  "typ": "JWT"
}
  • The payload looks something like this:
{
  "iss": "<name-of-the-caller-service>",
  "exp": 12345,
  "aud": "<name-of-the-callee-service>",
  "iat": 67890,
  "rv": "<uuid-as-rsa-key-pair-version>",
  "erb": "<encoded-request-body>"
}
  • And then we can generate the signature using a private key (loading of privateKey is sketched just below):
newJwt := jwt.NewWithClaims(jwt.SigningMethodRS256, payload)
signedToken, err := newJwt.SignedString(privateKey)
if err != nil { /* handle the signing error */ }
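
The privateKey value itself first needs to be parsed from the PEM block held in the environment. With the golang-jwt package, that could look like the following sketch (the variable name is an assumption, mirroring the RSA_PUB_KEY_<uuid> convention shown later):

import (
	"crypto/rsa"
	"log"
	"os"

	"github.com/golang-jwt/jwt/v4"
)

// loadPrivateKey parses the PEM-encoded RSA private key held in an
// environment variable (the variable name here is illustrative).
func loadPrivateKey() *rsa.PrivateKey {
	key, err := jwt.ParseRSAPrivateKeyFromPEM([]byte(os.Getenv("RSA_PRIV_KEY_12341234")))
	if err != nil {
		log.Fatalf("parsing RSA private key: %v", err)
	}
	return key
}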

Once a request is received by service B, service B uses a public key from its environment variables to verify the JWT signature. That looks something like this:

if strings.HasPrefix(authHeader, "Bearer") {
	// Isolate the token from the "Bearer <token>" header value.
	token := strings.Split(authHeader, " ")[1]

	err := jwt.Verify(token, "audience_name", c.Body())
	if err != nil {
		return c.Status(400).JSON(NewResponseError(BadRequest, err.Error()))
	}
}

In the example above we are using HTTP/1.1, so the token travels in the Authorization header as a bearer token and we split the string to isolate it. For remote procedure calls (e.g., gRPC), we pass the JWT in the request metadata instead.
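
For the gRPC case, attaching the token to the outgoing metadata could look like this sketch, which assumes the google.golang.org/grpc/metadata package (the helper name is illustrative):

import (
	"context"

	"google.golang.org/grpc/metadata"
)

// withBearer attaches the signed JWT to the outgoing gRPC metadata,
// playing the role the Authorization header plays for HTTP/1.1.
func withBearer(ctx context.Context, signedToken string) context.Context {
	return metadata.AppendToOutgoingContext(ctx, "authorization", "Bearer "+signedToken)
}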

One-to-many service-client relationship

In our design, many services call a single service for data, and every request must be secured. To support this, we add each caller's public key to the callee's environment variables, named with a UUID (universally unique identifier); the callee can then use the key-pair UUID carried in the token to look up the matching public key, as sketched below.
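
Putting the pieces together, a verification helper that supports this one-to-many lookup might look like the following sketch, again assuming github.com/golang-jwt/jwt/v4. This is illustrative, not our actual shared library:

import (
	"encoding/base64"
	"errors"
	"fmt"
	"os"

	"github.com/golang-jwt/jwt/v4"
)

// Verify checks a token from any caller. jwt.Parse validates the
// signature and the exp claim; we then check the audience and that the
// request body matches the signed "erb" claim.
func Verify(tokenString, audience string, body []byte) error {
	token, err := jwt.Parse(tokenString, func(t *jwt.Token) (interface{}, error) {
		if _, ok := t.Method.(*jwt.SigningMethodRSA); !ok {
			return nil, fmt.Errorf("unexpected signing method %v", t.Header["alg"])
		}
		// The "rv" claim names the key pair; the matching public key
		// lives in an environment variable that embeds the same UUID.
		rv, _ := t.Claims.(jwt.MapClaims)["rv"].(string)
		return jwt.ParseRSAPublicKeyFromPEM([]byte(os.Getenv("RSA_PUB_KEY_" + rv)))
	})
	if err != nil || !token.Valid {
		return fmt.Errorf("invalid token: %w", err)
	}
	claims := token.Claims.(jwt.MapClaims)
	if !claims.VerifyAudience(audience, true) {
		return errors.New("token issued for a different audience")
	}
	if claims["erb"] != base64.StdEncoding.EncodeToString(body) {
		return errors.New("request body does not match signed erb claim")
	}
	return nil
}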

How to store public keys

The keys are saved as environment variables in YAML files, which are then encrypted with SOPS. This way we can store the keys in our GitHub repository as encrypted files.

We created a YAML file that looks like this:

secret:
  - name: RSA_PUB_KEY_12341234
    value: |-
      -----BEGIN PUBLIC KEY-----
      MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQDGyqyxQEC67z6ZAW075ihmbI8m
      JH/fdoHCztr6HqeNyZDj6jCpmt6z29WCNUUUuukv3BkXSjP4j34apFzXd7VsII/M
      io8y/HTexG8+fd+0k1xy8kFoFkrMhd9m9vJtaGShffjO93zxTLiWnUrJcLDrh1j9
      EL1bB92vXBQG7WxFBQIDAQAB
      -----END PUBLIC KEY-----

Next we call sops --encrypt --<key location> <yaml file location> > secrets.yaml to generate a SOPS-encrypted secret file:

secret:
    - name: ENC[AES256_GCM,data:mX8EfslZuUjHBGgMd3IczsFAG1Q=,iv:LC3AEAX4nrF2fFUAYaUDQaxZ1qf20Q+iliV+zVNUsnY=,tag:nTt2HRfj3ed0jLUh7KB5JA==,type:str]
      value: ENC[AES256_GCM,data:in1Dr3jBZ01R55vsQHieRhuabaa3IAwIA1m21WLf2RSY213iW1KQPN65dVctp09cpnX147n4gJNDU7Ds2c0GcTmY0xqd/FUsoX3SBZ+XfPlDz0UP77EPci8/jwpyYhJOVJh1bXTbNrFP2OELI8Nmm2Dw0LK1OlX2/neWQOUPJukhLhKWFbWX69pcVp3z9c5wiQMjDCZ7OHozoo44xrHU9zs+7vOeHUdh9Av2xaZ5xP3F7neZ3eFB36ZFUu7ADV62eulrxLJJTp5v04BOPbrK0rNpjWsi7T2fT2p0gHX+OgulB+WuVJcKeqTe8yPljN5XPYKqMybvXR0NP0B1MREjw+8njsQWZOD2nY73fXdfBA==,iv:ouMHWjbkmTnoTedu/RqpTHCHMa0h8gFN082TvzkfTqk=,tag:af5zrXIOoa9YVYsIJ2/dxw==,type:str]
sops:
    kms: []
    gcp_kms:
        - resource_id: key location
          created_at: "1234"
          enc: somebase64data
    azure_kv: []
    hc_vault: []
    age: []
    lastmodified: "12345"
    mac: ENC[AES256_GCM,data:uAmJA87dJLlqoxPZMdyEgJB2xNBcTbXklL08vH55BaOp03y5ip6yiES0xXJYMToSr6UHuqxtQA16AcHRYcrCKlrHHibdkrkVBEM1sCaZ7ENLLKArJODYXqkL0h7SSqwgvHG8CwEn919tFtc3nkaPju94kubxo899Jt9kEH2ScgQ=,iv:Y9YDFqGq72/wtvv2o/B59/OzANbOF184aOjWs2hiWVs=,tag:fwL5De4tKfrQSNMGa74cMg==,type:str]
    pgp: []
    unencrypted_suffix: _unencrypted
    version: 1234

During deploy we run a custom script to decrypt the SOPS file and load the keys into the pod environment:

/root/go/bin/sops -d ${variables.helm_sops_file} > secrets.decrypted.yaml 

The script uses SOPS to decrypt the encrypted YAML file and stores the result so the values can be used as environment variables during the helm install process. During helm install, the regular deployment template works as long as it references the values we need.

In our case, the deployment template sets the common environment variables and also loads the secrets from the decrypted SOPS file:

env:
  {{- range .Values.envs }}
  - name: {{ .name }}
    value: {{ .value | quote }}
  {{- end }}
  {{- range .Values.secret }}
  - name: {{ .name }}
    value: {{ .value | quote }}
  {{- end }}

Our deployment steps:

  1. Decrypt SOPS secret (public/private keys)
  2. Helm install using decrypted secrets
  3. Generate new pods with the secrets as environment variables

Key Rotations

We rotate our keys every 90 days by changing the keys for both service A and service B. In order to do this, we:

  1. Generate a new RSA public/private key pair
  2. Add the new public key to service B
  3. Replace the private key in service A
  4. Remove the old public key from service B

Adding the new public key before swapping the private key means tokens signed with either key verify throughout the rotation, so no requests fail mid-rollout.

Looking ahead…

Our long-term solution is to incorporate a service mesh such as Istio to enable cross-cluster API communication.

A service mesh works by adding a proxy sidecar container to each pod, which is responsible for managing network traffic. Because the proxies run in a separate container, we can layer on more complex logic and authentication policies such as mTLS.

A service mesh is more secure because it protects network traffic with mTLS (mutual TLS), which provides service-to-service authentication through two-way cryptographic verification with digital certificates. With a service mesh we also wouldn't need to expose our services publicly, because a mesh such as Istio only needs access to the Kubernetes API.

Even once we adopt a service mesh, we can continue to use JWT verification: JWTs can serve as an extra layer of security for our microservices stack.

Conclusion

JWT is a powerful tool that helps us secure our APIs, but it is by no means bulletproof; there are still ways it can be exploited. For example, if our private keys are leaked, an attacker can use them to sign requests to our API. Using JWTs adds another layer of security to our APIs, not an absolute guarantee.

There are many resources online that can help you learn more about JWT. A website I use frequently is https://jwt.io/, where you can inspect your JWTs or create new tokens using different algorithms.

Special thanks to everyone who helped us with this project:

  • Zhi Qu
  • Yermek Sakiyev
  • Chitresh Deshpande
  • Wei Li
  • Mudit Seth
  • Austin Gregory
  • Caleb Geene
  • Brian Tian
  • Josh Roepke

Thank you for reading!
