If you’ve worked with AWS in any sort of capacity, you’ve probably learned they have a unique way of doing things. They start off meetings reading documents in silence, they begin new projects by working backward, and no matter what they do, they drive their leadership principles… hard.
There’s a reason Amazon basically runs the world. Their way of doing things works.
Among the many artifacts they produce to help companies build best-in-class software is their general design principles. If you’ve ever been through an AWS Well-Architected review, you know all about them in excruciating detail (in a good way).
If you’re into building serverless applications, AWS has an entirely different set of design principles you should follow. They take the core pillars of the Well-Architected model and look at them through a serverless application lens.
Today we’re going to take a look at all 7 principles and talk about how those translate to your designs as a solutions architect.
Functions are concise, short, single purpose and their environment may live up to their request lifecycle. Transactions are efficiently cost aware and thus faster executions are preferred.
What this means - Functions, in this case Lambdas, are supposed to be focused. They spin up, do a job, and spin down.
Do what you can to minimize execution time. Take advantage of asynchronous workflows where you can. Drop a task in an SQS queue for additional processing and stop execution of the originating Lambda. Try to avoid calling another Lambda from within a Lambda.
Functions should follow the single responsibility principle. If you need to do two separate actions, consider using two separate functions.
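A minimal sketch of that pattern might look like the following. The event shape, queue URL, and function names are hypothetical, and the `send_message` callable is injected so the sketch runs anywhere; in a real function you would pass `boto3.client("sqs").send_message`.

```python
import json

# Hypothetical queue URL -- in a real deployment this would come from
# configuration rather than being hard-coded.
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/example-queue"

def handler(event, send_message):
    """Do one small, focused job, hand the heavy follow-up work to a queue,
    and stop -- instead of synchronously invoking another Lambda."""
    order_id = event["order_id"]  # assumed event shape, for illustration
    # 1. The single responsibility of this function: accept the order.
    receipt = {"order_id": order_id, "status": "accepted"}
    # 2. Drop the expensive part (e.g. invoicing) onto SQS for a separate
    #    function to pick up, then end this execution.
    send_message(
        QueueUrl=QUEUE_URL,
        MessageBody=json.dumps({"task": "invoice", "order_id": order_id}),
    )
    return receipt
```

The originating Lambda returns in milliseconds; the invoicing work happens asynchronously in its own short-lived function.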
Serverless applications take advantage of the concurrency model, and tradeoffs at the design level are evaluated based on concurrency.
What this means - Don’t spend hours and hours trying to figure out how to optimize your processes into as few requests as possible. That’s not the point.
This principle is important because it tells solutions architects to rely on the ability of a serverless application to horizontally scale during peak times. If you follow design principle number one, your functions will be short and sweet. They will respond in a couple hundred milliseconds.
During peak times, your application will scale out. It might be serving 100,000 requests a minute, but to your end users it will feel like they are the only ones using the system. It will be fast. The software will stay responsive under heavy load.
With this in mind, don’t worry too much about the number of requests you’re making. Design your systems to take advantage of AWS’s ability to automatically scale.
If you can break down a long-running task into multiple pieces, do it. This design principle is about scaling horizontally over scaling vertically. By taking a big, heavy task and turning it into small, singular tasks, you end up with more performant software that also costs less money.
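Breaking a big job into small ones can be as simple as a fan-out step. This is a sketch under assumptions: `enqueue` is injected (in production it might wrap `sqs.send_message_batch`), and the batch size of 10 reflects the SQS `SendMessageBatch` limit of 10 messages per call.

```python
def fan_out(items, enqueue, batch_size=10):
    """Split one long-running job into small batches and enqueue each one
    so many short-lived workers can process them concurrently."""
    batches = [items[i:i + batch_size] for i in range(0, len(items), batch_size)]
    for batch in batches:
        enqueue(batch)  # each batch becomes a small, independent task
    return len(batches)
```

Instead of one function grinding through 25 items, three workers each handle a slice, and each execution stays short.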
Function runtime environment and underlying infrastructure are short-lived, therefore local resources such as temporary storage is not guaranteed. State can be manipulated within a state machine execution lifecycle, and persistent storage is preferred for highly durable requirements.
What this means - Assume the function is stateless. Go to the database every time you need to do something with entity state. Don’t rely on globally-scoped variables.
A trick to optimize Lambda performance is to instantiate SDK clients globally, but leave it at that. Since your Lambdas spin up and down all day, you never know what will or won’t be there.
Start every execution like it’s brand new, and load what you need every time.
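The split between what to keep global and what to load fresh can be sketched like this. The client stand-in and `fetch_entity` callable are assumptions for the sake of a runnable example; in a real function the module-level line would be something like `boto3.client("dynamodb")` and the fetch would be a database read.

```python
# Module scope runs once per cold start. Initializing SDK clients here lets
# warm invocations reuse them -- the one global worth keeping.
# (A dictionary stands in for a real client so the sketch runs anywhere.)
CLIENT = {"initialized": True}

def handler(event, fetch_entity):
    """Treat every invocation as brand new: load entity state from the
    database each time instead of trusting a cached global copy."""
    entity = dict(fetch_entity(event["id"]))  # copy: never mutate shared state
    entity["processed"] = True
    return entity
```

Two invocations with the same id produce the same result whether or not they land on the same warm environment, because nothing but the client outlives a request.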
Underlying infrastructure may change. Leverage code or dependencies that are hardware-agnostic as CPU flags, for example, may not be available consistently.
What this means - You chose serverless for a reason: to not have to deal with hardware. Serverless functions are meant to be run at a higher level than CPU flags or other hardware commands.
When designing a system, forget about server-side hardware. Use environment variables for configuration and power-tune your functions after you’ve implemented them.
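Configuration via environment variables can be sketched as a small loader. `TABLE_NAME` and `LOG_LEVEL` are hypothetical variable names, and the defaults stand in for values you would set on the function itself in production.

```python
import os

def load_config():
    """Build runtime configuration from environment variables, with safe
    defaults, so the code never depends on the underlying hardware."""
    return {
        "table_name": os.environ.get("TABLE_NAME", "example-table"),
        "log_level": os.environ.get("LOG_LEVEL", "INFO"),
    }
```

Changing a table name or log level then becomes a deployment setting, not a code change.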
Chaining Lambda executions within the code to orchestrate the workflow of your application results in a monolithic and tightly coupled application. Instead, use a state machine to orchestrate transactions and communication flows.
What this means - If you’re following principles 1 and 2, you should have designed a robust, modular system. A single command might take 4 or 5 Lambdas to complete. Remember, this is a good thing. You’ve enabled the system to scale horizontally and allowed higher throughput and performance in the process.
Don’t invoke Lambdas from within other Lambdas. It’s slower, considered an anti-pattern by AWS, and significantly more expensive. It is more expensive because you have to pay for execution time of the inner Lambda and the calling Lambda while it waits.
Step Functions are designed to resolve those issues. They give you an easy-to-digest state machine diagram of your workflow, and second-to-none traceability through executions.
If a Step Function seems a bit heavy for your system, you could try an Express Workflow, which is much lighter, has higher throughput and concurrency, and is billed similarly to Lambda.
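Orchestrating with a state machine instead of Lambda-to-Lambda calls might look like this minimal Amazon States Language sketch. The state names and function ARNs are placeholders, not a real deployment.

```json
{
  "Comment": "Hypothetical order workflow; ARNs and names are placeholders.",
  "StartAt": "ValidateOrder",
  "States": {
    "ValidateOrder": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:us-east-1:123456789012:function:ValidateOrder",
      "Next": "ChargePayment"
    },
    "ChargePayment": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:us-east-1:123456789012:function:ChargePayment",
      "Retry": [
        {
          "ErrorEquals": ["States.TaskFailed"],
          "IntervalSeconds": 2,
          "MaxAttempts": 3,
          "BackoffRate": 2.0
        }
      ],
      "End": true
    }
  }
}
```

Neither Lambda knows the other exists; the state machine owns the workflow, and neither function pays to sit idle waiting on the other.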
Events such as writing a new Amazon S3 object or an update to a database allow for transaction execution in response to business functionalities. This asynchronous event behavior is often consumer agnostic and drives just-in-time processing to ensure lean service design.
What this means - Asynchronous operations are your friends. Your customers don’t need to wait on everything that happens in the system. Embrace event-driven architectures.
This is how software engineers naturally think. If this happens, then this other thing should occur.
Serverless applications are by nature distributed systems. To get the pieces playing together, use events, like a new document finishing its upload to S3 or a DynamoDB stream firing after a database write, to trigger follow-on activities.
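An event-triggered function can be sketched like this handler for an S3 object-created notification. The `Records` / `s3` / `bucket` / `object` structure follows the documented S3 event notification shape; `process_object` is injected here for testability and is where real follow-on work (indexing, resizing, etc.) would go.

```python
import urllib.parse

def handler(event, process_object):
    """React to an S3 object-created event: the uploader never waits on this
    work -- it runs just-in-time when the event fires."""
    results = []
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        # S3 delivers object keys URL-encoded in event notifications.
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
        results.append(process_object(bucket, key))
    return results
```

The producer (whoever uploaded the file) is completely unaware of this consumer, which is exactly the decoupling the principle asks for.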
Software doesn’t have to be synchronous anymore.
Design your system with the intention to keep your users waiting as little as possible.
Operations triggered from requests/events must be idempotent as failures can occur and a given request/event can be delivered more than once. Include appropriate retries for downstream calls.
What this means - Anomalies happen. There will be times when a Lambda triggered off an event randomly fails, or the same event is delivered twice. As a solutions architect, you must design the system so that handling the same request more than once produces the same outcome as handling it once.
This is idempotency.
You should design idempotency and retries into (almost) every Lambda in your app. Step Functions provide an easy way to retry Lambdas with a configurable backoff rate in the event you have an issue.
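The core of an idempotent handler can be sketched like this. It is a sketch under assumptions: `processed_ids` stands in for a durable store (in a real system, something like a DynamoDB table written with a conditional put), and the event id field is hypothetical.

```python
def handle_event(event, processed_ids, apply_change):
    """Process an event so duplicate deliveries are harmless: the side
    effect runs once, and repeats return the same outcome."""
    event_id = event["id"]
    if event_id in processed_ids:
        # Same event delivered again: skip the side effect, report success.
        return {"id": event_id, "status": "duplicate-ignored"}
    apply_change(event)           # the one-time side effect
    processed_ids.add(event_id)   # remember it was handled
    return {"id": event_id, "status": "processed"}
```

Delivering the same event twice now changes nothing: the second call short-circuits, so a retry is always safe.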
There are only two hard problems in distributed systems: 2. Exactly-once delivery 1. Guaranteed order of messages 2. Exactly-once delivery

— Mathias Verraes (@mathiasverraes) August 14, 2015
Since you can’t reliably trust if requests are going to come in the right order or that they will come in only once, designing systems with that in mind is critical.
AWS really knows their stuff. It is smart to keep these in your back pocket when designing a serverless solution.
We all want systems to be fast, reliable, and cost-effective. Following the AWS serverless design principles will make sure we do that.
So remember, keep it focused. Keep it small. Design for concurrent executions. Don’t worry about the number of calls you’re making (to a point). Orchestrate with managed services. Design with replay-ability in mind.
Serverless can be scary and a bit intimidating at times, but it takes practice. A good solutions architect makes a complex solution out of a complex problem. A great solutions architect makes a simple solution out of a complex problem.
Be great. Use the tools the way they were meant to be used.