I am a cloud architect for my day job. I don’t code as much as I did when I was a developer. Instead, I spend my time thinking about the future. I make one-, three-, and five-year plans for how we can progress efficiently in the cloud while keeping our practices and methodologies up to date.
It’s a lot of theory. But it’s exciting.
From all this, one thing is abundantly clear: we have a great understanding of what is possible today with serverless.
As responsible architects, we need to think about what serverless might hold in the future. Where does it go from here? Has it peaked?
I don’t think it has. I think the future holds a completely different serverless world than we know today.
Let’s talk about some disruptors that are likely in our future.
In production-level serverless applications, monitoring your application is paramount to your success. You need to know if you’ve dropped any events, where the bottlenecks are, and if items are piling up in dead letter queues. Not to mention you need the ability to trace a transaction end to end.
This is an area that is finally beginning to take off. As more and more serverless production workloads are coming online, it is becoming increasingly obvious there’s a gap in this space.
In the future we need tools like those the vendors listed above offer, but with built-in optimization and insights, like AWS Trusted Advisor. We need app monitoring to evolve. When we hear “application monitoring,” we need to assume more than service graphs and queue counts.
Application monitoring will become more than fancy dashboards and Slack messages. Based on the workload it observes, it will eventually tell us when we’ve provisioned the wrong infrastructure.
There is endless potential with monitoring. But what we need to strive for is normalizing infrastructure decisions based on workload. Contrary to what many developers think, their use case is not special. We are all solving the same 8 problems in different domains.
To get here, monitoring services need to develop to a point where they understand the infrastructure. Not only that, they must also recognize traffic patterns so they can recommend how to optimize for cost, performance, or sustainability (or all three).
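To make that idea concrete, here is a minimal sketch of what a workload-aware recommendation might look like. The function, metric names, and thresholds are all invented for illustration; no monitoring vendor works exactly this way.

```python
# Hypothetical sketch: turn a few observed Lambda metrics into
# plain-language optimization hints. Thresholds are arbitrary.

def recommend(avg_duration_ms: float, timeout_ms: int, cold_start_pct: float) -> list[str]:
    """Map observed workload metrics to optimization recommendations."""
    hints = []
    if avg_duration_ms > 0.8 * timeout_ms:
        # Invocations are running close to the configured limit.
        hints.append("raise the timeout or split the function")
    if cold_start_pct > 5.0:
        # A meaningful share of requests pay a cold-start penalty.
        hints.append("consider provisioned concurrency")
    if not hints:
        hints.append("no change recommended for the observed workload")
    return hints

print(recommend(avg_duration_ms=2900, timeout_ms=3000, cold_start_pct=8.2))
```

The interesting part isn’t the thresholds; it’s that the input is observed traffic, not a human’s guess at deploy time.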
We already know serverless applications are built from a combination of business logic and infrastructure as code (IaC). Business logic is the intellectual property that makes your application distinctly yours in the way it solves a business problem. Infrastructure as code is what defines the cloud vendor resources that run the business logic.
Tooling for infrastructure as code continues to improve. Tools like AWS SAM and the CDK abstract away some of the complexity of CloudFormation, making it easier to “connect the dots” between your resources. Tools like Terraform and the Serverless Framework abstract away much of the vendor-specific detail, letting you describe workloads for multiple cloud providers with a single tool.
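As a small example of that abstraction, here is a minimal AWS SAM template: one function plus an HTTP endpoint (the function name and handler are placeholders). SAM expands these few lines into the much larger set of underlying CloudFormation resources.

```yaml
Transform: AWS::Serverless-2016-10-31
Resources:
  HelloFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: app.handler
      Runtime: python3.12
      Events:
        HelloApi:
          Type: Api          # SAM generates the API Gateway resources
          Properties:
            Path: /hello
            Method: get
```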
As our understanding of technology grows, abstractions get higher and higher, which in turn makes development easier and easier.
We are already in the midst of a paradigm shift. One that takes us to an unprecedented level of abstraction.
The folks at Serverless Cloud are abstracting away infrastructure as code. Their mission is to infer the infrastructure you need from the business logic you write. “Infrastructure from code,” a phrase coined by Doug Moscrop and popularized by Jeremy Daly, describes this process.
You write the code to satisfy your business problem and the rest is taken care of.
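A toy sketch of what this inference could look like, with all names hypothetical and no resemblance claimed to Serverless Cloud’s actual SDK: you only register handlers, and a deploy step derives the infrastructure from what it sees.

```python
# Hypothetical "infrastructure from code" sketch: the decorator both wires
# up a handler and records metadata a deploy step could provision from.

class App:
    def __init__(self):
        self.routes = {}

    def get(self, path):
        """Register a GET handler; the decorator doubles as infra metadata."""
        def register(fn):
            self.routes[("GET", path)] = fn
            return fn
        return register

    def inferred_infrastructure(self):
        """What a deploy step could derive without any IaC template."""
        return [f"http endpoint: {method} {path}" for (method, path) in sorted(self.routes)]

app = App()

@app.get("/orders/{order_id}")
def get_order(order_id: str):
    # Plain business logic; no IaC written anywhere.
    return {"id": order_id, "status": "shipped"}

print(app.inferred_infrastructure())
```

The developer only wrote `get_order`; the route (and, by extension, the gateway resources behind it) falls out of the code itself.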
Infrastructure from code is the next major disruption to serverless.
Once Serverless Cloud has set the precedent for a range of use cases, others will follow suit. Innovators will get in the space and make even higher level abstractions. Well-defined best practices and patterns will be abstracted away into simple transforms based on your business logic.
By abstracting away infrastructure decisions, serverless goes from the wild west to a standard development practice reinforced by de facto patterns and standards that are implemented for us automatically.
By going all-in on infrastructure from code, we take away what is considered the “hard part” of serverless: the part where developers spend so much time figuring out which permission they’re missing or why a trigger isn’t firing.
With serverless, going from idea to production at breakneck speed becomes even easier.
After we get smart monitoring based on workloads and infrastructure generated for us by looking at our code, what’s next?
This is our jaw-on-the-floor moment.
Imagine a scenario where your development teams have written a serverless application. It gets initially deployed to the cloud with infrastructure inferred from the code.
The application makes it to production and runs for about a month. Over the course of that month, the monitoring analytics have determined your traffic patterns and successfully mapped every distributed transaction across your microservices.
The initially inferred resources were not appropriate for the scale of your app. Monitoring caught the inefficiency and automatically reprovisioned the correct resources for the traffic and use case.
In other words, the infrastructure doesn’t just scale to meet traffic. It rearranges and reorganizes to optimize cost, performance, and sustainability based on real traffic and workload data.
Your application will continuously fine-tune its infrastructure. It might start with an integration from API Gateway to a Lambda function to DynamoDB. But as usage analytics accumulate, it might remove the Lambda function and integrate API Gateway directly with DynamoDB.
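The rewiring step above can be sketched in a few lines. This is an invented toy, not a real platform feature: if monitoring observes that a Lambda function is a pure pass-through, a self-tuning platform could drop it in favor of a direct service integration.

```python
# Hypothetical optimizer sketch: remove hops that monitoring has flagged
# as pass-throughs (no transformation logic observed at runtime).

def optimize(pipeline: list[str], passthrough: set[str]) -> list[str]:
    """Return the pipeline with observed pass-through services removed."""
    return [svc for svc in pipeline if svc not in passthrough]

before = ["api_gateway", "lambda", "dynamodb"]
after = optimize(before, passthrough={"lambda"})
print(after)  # the Lambda hop is gone; API Gateway talks to DynamoDB directly
```

The hard part, of course, is the monitoring that justifies putting `"lambda"` in the pass-through set, which is exactly why the two disruptions depend on each other.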
Revolutionizing both application monitoring and infrastructure from code is fundamental to making this a reality.
In a futuristic world of self-provisioning infrastructure, there are certain aspects of serverless development that won’t change. Data modeling and API modeling will still need to be done by hand.
Why, you ask?
When we talk about abstracting away complexities, they tend to be “domain-free,” meaning you’re pulling out the repeatable, cut-and-dried pieces. Once you start including domain-specific data, nuance comes in and makes it difficult to generalize.
Your data model is driven by your access patterns. Access patterns are completely domain driven. They are built based on how you solve the business problem your way. There is no generic abstraction that can automatically build a data model to satisfy the way you access your data.
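A concrete illustration, with invented entities and key formats: in a DynamoDB-style single-table design, a human maps each access pattern to a key layout by hand, and there is no generic rule that could derive these choices from the code alone.

```python
# Illustrative only: key builders for two hand-designed access patterns
# in a single-table layout. Entity and key names are made up.

def customer_key(customer_id: str) -> dict:
    """Pattern: 'get customer profile by id' -> one item per customer."""
    return {"PK": f"CUSTOMER#{customer_id}", "SK": "PROFILE"}

def order_key(customer_id: str, order_id: str) -> dict:
    """Pattern: 'list all orders for a customer' -> orders share the
    customer's partition key, so a single Query returns them all."""
    return {"PK": f"CUSTOMER#{customer_id}", "SK": f"ORDER#{order_id}"}

print(order_key("42", "1001"))
```

Change the access pattern (say, “list orders by date across all customers”) and the key design changes with it; that dependence on the domain is what resists automation.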
Tooling in this area is getting better, but it might never reach a point that is 100% production ready.
API modeling follows a similar vein. If you’re designing a REST API, you build your endpoints to drill into data entities. If we have to model data entities by hand, it makes sense that API modeling stays manual as well.
Designing APIs for developer experience requires a human touch. Sometimes what makes for a better DevEx goes against the fundamentals. Creating an endpoint that breaks REST guidelines but saves developers five steps is a tradeoff many are willing to make.
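For instance, a purist REST design might require clients to create an order, reserve stock, and trigger a receipt as separate calls; an action-style endpoint collapses them into one. The endpoint and field names below are invented for the example.

```python
# Hypothetical composite endpoint: not a clean REST "resource", but one
# round trip for the client instead of three.

def checkout(cart: dict) -> dict:
    """POST /checkout: place the order, reserve stock, send a receipt."""
    order = {"items": cart["items"], "status": "placed"}
    order["stock_reserved"] = True   # a step clients would otherwise call separately
    order["receipt_sent"] = True     # likewise
    return order

print(checkout({"items": ["book"]}))
```

Whether that tradeoff is right depends on the domain, which is exactly why it stays a human decision.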
The future holds an exciting world for serverless. Abstractions will continue to get higher and higher and make our jobs continuously easier.
Applications could become “self-aware” in the sense that they know the best infrastructure for their use case and the traffic they see. We’re already seeing progress with infrastructure from code, and it will continue to get better with time.
We have a long way to go and, of course, this is all theoretical. But not impossible.
The tools being developed today are truly revolutionary and integrate seamlessly into your cloud vendor environments. It’s not that far of a stretch to think some of these tools can perform usage analysis and modify the infrastructure.
We have many things to look forward to with serverless in the years to come. Keep learning, keep experimenting, keep innovating.