AWS JS SDK v3 first impressions

(Read this article on the blog)

I waited a few months to try out the new version of the AWS Javascript SDK to give the maintainers some time to iron out the initial issues. But since it is
likely to be the de-facto way to write code that interfaces with AWS APIs, it’s finally time to start the migration.

This article is about my initial impressions and thoughts after using the new version for a small project. Parts of it overlap with Michael Wittig's
experiences, which he wrote about not long ago.

Modularization

The first thing that catches the eye with the new SDK is the number of packages you can install. It’s the result of a major refactoring effort and it yields
multiple benefits.

But with multiple packages, the package.json will have a lot more dependencies:

{
  "dependencies": {
    "@aws-sdk/client-dynamodb": "^3.5.0"
  }
}

And imports will be a lot longer too:

import {DynamoDBClient, ListTablesCommand} from "@aws-sdk/client-dynamodb";

This is not necessarily a bad thing. I prefer the explicit over the implicit, and this new packaging structure gives a lot of information about what is actually
used.

That information is exactly what tools such as bundlers can use to remove what is installed but not used. This is called tree shaking, and the modular structure fully
supports it. It allows, for example, Webpack to produce a bundle that is a lot smaller than what was possible with v2. This is a big plus for
browser-based apps.

What I like even more about this modularization is that it exposes a lot of internal utilities that were previously hidden in the SDK code. For example, there is an
official ARN parser.
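The official parser lives in the @aws-sdk/util-arn-parser package. As a rough illustration of what such a parser does, here is a hypothetical simplification (parseArn below is my own sketch, not the package's implementation):

```javascript
// Hypothetical simplification of an ARN parser: an ARN is a
// colon-separated string whose resource part may itself contain colons.
const parseArn = (arn) => {
	const [prefix, partition, service, region, accountId, ...rest] = arn.split(":");
	if (prefix !== "arn" || rest.length === 0) {
		throw new Error(`Not a valid ARN: ${arn}`);
	}
	// re-join the remainder so colons inside the resource survive
	return {partition, service, region, accountId, resource: rest.join(":")};
};

const parsed = parseArn("arn:aws:sns:us-east-1:123456789012:my-topic");
```

With the utility published as its own package, this kind of logic no longer needs to be reimplemented in application code.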

It also makes it easier to get at individual parts of the functionality and change how they work. For example, the AWS signature algorithm is implemented in
a separate package, which makes it a lot easier to build something that is not provided out-of-the-box but relies on that algorithm.

Separating clients and operations

The new SDK allows a new way to construct commands. You can import the client and the operations separately, making it explicit what operations a piece of code
uses.

import {DynamoDBClient, GetItemCommand} from "@aws-sdk/client-dynamodb";

const client = new DynamoDBClient({});

const res = await client.send(new GetItemCommand({
	TableName,
	Key,
}));

The non-modular approach still works:

import {DynamoDB} from "@aws-sdk/client-dynamodb";

const client = new DynamoDB({});

const res = await client.getItem({
	TableName,
	Key,
});

Separating the commands feels like a more functional approach to me, so for the time being I'll prefer it. It makes it easier to pass around functions that
create commands, and it requires no change to how the client is called.
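For instance, a command factory is just a plain function, so it can be passed around independently of the client. In this sketch, getUser and the stub client are made up for illustration; with the real SDK the factory would return a command such as new GetItemCommand(...):

```javascript
// A command factory is a plain function returning a command-like object.
// A plain object stands in here so the sketch is self-contained.
const getUser = (id) => ({
	input: {TableName: "users", Key: {ID: {S: id}}},
});

// Generic sender: works with anything that has a send() method,
// which is exactly the shape of the v3 clients.
const sendAll = (client, factory, ids) =>
	Promise.all(ids.map((id) => client.send(factory(id))));
```

Because the client only ever sees command objects, swapping the operation means swapping the factory, not the call site.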

Middleware support

This is one of the more prominently advertised features of the new SDK, though I don't think many people will go and write new middlewares for the AWS clients.

Middlewares are customization functions that run whenever a client object makes a call to an AWS API. They can modify the request/response headers, add
logging, caching, and all sorts of things. This opens the possibility for third parties to offer functionality that integrates into the core of the AWS SDK.
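As a sketch, a v3 middleware is a function that receives the next handler and returns a wrapped handler. The logging middleware below follows that shape; the step and name values in the registration comment are illustrative:

```javascript
// A minimal logging middleware in the v3 shape: wrap the next handler,
// log the input before the call and the output after it.
const loggingMiddleware = (next, context) => async (args) => {
	console.log("request input:", args.input);
	const result = await next(args);
	console.log("response output:", result.output);
	return result;
};

// With a real client it would be registered roughly like this:
// client.middlewareStack.add(loggingMiddleware, {step: "initialize", name: "logger"});
```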

What I like about this new construct is that most of the extra functionality of the client libraries is implemented as middlewares, giving an easy way to see what is
happening under the hood. For example, middleware-retry is responsible for retrying operations when there is a failure. With a separate package, it's easy to see
which errors are retried by default.

Utils

Pagination

There is a new pagination util implemented as an async generator function, making my own implementation
effectively obsolete. The maintainers presumably realized that each service paginates differently, so they made a paginator function for
each command that supports pagination. For example, the CloudWatch Logs client comes with 7 pagination utils.

Apart from the repetition in the library code, these pagination utils make the code you write simpler. For example, a DynamoDB scan supports for await iteration
out of the box:

import {DynamoDBClient, paginateScan} from "@aws-sdk/client-dynamodb";

const client = new DynamoDBClient({});

for await (const page of paginateScan({
		client,
		pageSize: 25,
	}, {
		TableName,
		ProjectionExpression: "#pk",
		ExpressionAttributeNames: {"#pk": "ID"},
	})) {
	// use page
}
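Under the hood, such a paginator is an async generator that keeps sending the command until the service stops returning a continuation token. Here is a rough stand-in for the DynamoDB case (paginate below is my simplification, not the SDK's code):

```javascript
// Simplified stand-in for a v3 paginator: loop on LastEvaluatedKey /
// ExclusiveStartKey until DynamoDB reports no more pages.
async function* paginate(client, makeCommand, input) {
	let startKey;
	do {
		const page = await client.send(
			makeCommand({...input, ExclusiveStartKey: startKey})
		);
		yield page;
		startKey = page.LastEvaluatedKey;
	} while (startKey !== undefined);
}
```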

DynamoDB marshaller

The DocumentClient is gone from the new version. It converted between DynamoDB's internal attribute format and native Javascript types. The net effect was that
you could use items with strings, numbers, arrays, and other familiar types instead of the {S: "string"}, {N: "5"}, and other structures.

But the functionality is not gone, it's just moved to a separate utility package.

There are a marshall and an unmarshall function for this conversion. The former converts from Javascript to DynamoDB, while the latter does the
reverse. As a rule of thumb, when you send data to DynamoDB you'll use the first, and when you read data from it you'll use the second.

import {DynamoDBClient, GetItemCommand} from "@aws-sdk/client-dynamodb";
import {marshall, unmarshall} from "@aws-sdk/util-dynamodb";

const client = new DynamoDBClient({});

const res = await client.send(new GetItemCommand({
	TableName,
	// convert from Javascript object to DynamoDB format
	Key: marshall({
		type: "users",
	}),
}));

// convert DynamoDB format to Javascript object
return unmarshall(res.Item).count;

I like this approach more than the DocumentClient. It does not replicate functionality to a separate client library, and it also makes this conversion explicit.
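The conversion itself is mechanical. Here is a much-simplified sketch of what marshall does for a few common types (simpleMarshall is my own illustration; the real util-dynamodb functions handle many more cases, including nested maps, lists, sets, and binary data):

```javascript
// Simplified illustration of marshall for strings, numbers, and booleans
// only. Note that DynamoDB stores numbers as strings in the N attribute.
const toAttribute = (value) => {
	if (typeof value === "string") return {S: value};
	if (typeof value === "number") return {N: String(value)};
	if (typeof value === "boolean") return {BOOL: value};
	throw new Error(`unsupported type: ${typeof value}`);
};

const simpleMarshall = (item) =>
	Object.fromEntries(
		Object.entries(item).map(([key, value]) => [key, toAttribute(value)])
	);
```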

Misc changes

A nice touch is first-class Promise support. No need to add .promise() to every call.

Also, the global AWS.config is no longer available to configure all services in a central place. Again, I welcome this change, as it more often than
not resulted in poor usage patterns.

The new SDK brings first-class TypeScript support. In practice, it’s a lot less groundbreaking than it sounds as there were types for the client libraries already.
But it’s nice that they are guaranteed to be up-to-date.

Missing parts

One of my biggest concerns is that the new SDK is no longer preinstalled in the Lambda Node.js runtimes, which hurts functions that use only inline code. With the
v2 SDK, it is extremely easy to write a short Lambda function that interacts with AWS services. While it's better to install the aws-sdk package once your function
gets a package.json, short functions with no other dependencies could rely on the preinstalled one. I'm hoping there will be an official Lambda layer that adds
support for this.

As the release notes mention, automatically assuming a role based on a profile does not work. Unfortunately, it seems this won't be supported, and it requires
some extra code to work around.

Another useful thing that is missing (as of the time of writing) is the chainable temporary credentials provider. It's useful for assuming a role through
another role, which makes it easy to scope permissions.

Conclusion

The new SDK brings significant improvements to the structure and the observability of how it interacts with the AWS APIs. It requires some significant changes
to the projects that use it, but most of the changes make the code easier to understand and maintain.

Source: Advanced Web Machinery