Serverless Provides Benefits Far Beyond the Ease of Management
We often hear about best practices, but everything boils down to a specific use case and problem. The modern DevOps culture has introduced several useful paradigms: building infrastructure in a declarative and repeatable way, leveraging automation to facilitate seamless IT operations, and developing in an agile way to keep improving end results over time. Serverless can be considered an enabler for many of these practices.
1. It encourages components that do ONE thing
Many people argue about whether microservices are better than monolithic applications, but the answer depends on the use case. Whatever engineers think of microservices, everyone seems to agree that it’s beneficial to build software components responsible for only one thing (the “Single-responsibility principle”).
Why are components with a single responsibility beneficial?
- They are easier to change. As pointed out in the book “The Pragmatic Programmer”, making your software easy to change is a de facto principle to live by as an IT professional. For instance, when you leverage functional programming with pure (ideally idempotent) functions, you always know what to expect as input and output, so modifying your code is simple. Written properly, serverless functions encourage code that is easy to change, stateless, and produces consistent, repeatable results.
- They are easier to deploy — if the changes you made to an individual service don’t affect other components, redeploying a single serverless function or containerized application should not disrupt other parts of your architecture. This is one reason why many decide to split their Git repositories from a “monorepo” to one repository per service.
With serverless, you are forced to make your components small. For instance, you cannot run any long-running processes with AWS Lambda. At the time of writing, the maximum timeout configuration doesn’t allow for any process that takes longer than 15 minutes. You could switch to a serverless container with services such as ECS, but the point is, you need to break larger functionality into smaller components.
When we talk about serverless, we are not limited to execution environments such as AWS Lambda or ECS. When you use other serverless components, you will notice that they are designed to do ONE thing really well (again, giving AWS examples, but the same applies to other cloud vendors):
- SQS — simple yet highly effective message queuing service,
- SNS — as the name suggests, a simple yet powerful notification service,
- SES — the same but for sending emails,
- S3 — the simplest service for storing data — the same is true for GCP’s Cloud Storage and Azure’s Blob Storage.
There are many more services we could use to demonstrate this paradigm of doing one thing well in a serverless world, but you get the idea.
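To make the single-responsibility idea concrete, here is a minimal sketch of a Lambda handler that does exactly one thing. The event shape, field names, and the validate-and-normalize task are hypothetical; persisting, notifying, and billing would live in separate functions.

```python
import json


def handler(event, context):
    # Hypothetical single-purpose handler: validate an incoming order
    # payload and return a normalized record. Nothing else happens here.
    order = json.loads(event["body"])
    if "order_id" not in order or "amount" not in order:
        return {"statusCode": 400, "body": json.dumps({"error": "invalid order"})}
    normalized = {
        "order_id": str(order["order_id"]),
        "amount_cents": int(round(float(order["amount"]) * 100)),
    }
    return {"statusCode": 200, "body": json.dumps(normalized)}
```

Because the function is pure and stateless, it can be exercised directly in a unit test, with no AWS infrastructure involved.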
2. It enforces self-contained execution environments
Serverless not only forces you to keep your components small; it also requires that you define all resources needed to execute your function or container. This means you cannot rely on any pre-configured state: you need to specify all package dependencies, environment variables, and any other configuration your application needs. Regardless of whether you use FaaS or a serverless container, your environment must remain self-contained, since your code can be executed on an entirely different server any time it runs.
TL;DR: You are forced to build reproducible code.
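One practical consequence is that all configuration must come from declared inputs such as environment variables, never from state left behind on a host. A minimal sketch, with hypothetical variable names:

```python
import os


def load_config() -> dict:
    # Nothing is read from a pre-configured host, so the code runs
    # identically on any machine. TABLE_NAME and BATCH_SIZE are
    # hypothetical variable names for illustration.
    return {
        "table_name": os.environ["TABLE_NAME"],                 # fail fast if missing
        "batch_size": int(os.environ.get("BATCH_SIZE", "25")),  # explicit default
    }
```

Failing fast on a missing required variable surfaces configuration drift at startup rather than deep inside a request.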
3. It encourages more frequent deployments
If your components are small, self-contained, and can be executed independently of each other, nothing stops you from deploying more frequently. The need to coordinate functionality across individual components still exists (especially when it comes to the underlying data!), but individual deployments inherently become more independent.
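An independent deployment can be as small as zipping one file and pushing it to a single function, leaving every other service untouched. A sketch, assuming boto3 and AWS credentials are available where `deploy` runs (the function and file names are hypothetical):

```python
import io
import zipfile


def package_function(source: str, filename: str = "handler.py") -> bytes:
    # Zip a single-file function in memory: the artifact for one
    # independent deployment.
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
        zf.writestr(filename, source)
    return buf.getvalue()


def deploy(function_name: str, source: str) -> None:
    # Push only this one function; the rest of the architecture is
    # untouched. Requires boto3 and AWS credentials at runtime.
    import boto3
    boto3.client("lambda").update_function_code(
        FunctionName=function_name,
        ZipFile=package_function(source),
    )
```

The packaging step is pure and testable on its own, which is exactly what makes per-component pipelines cheap to run often.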
4. It encourages the least-privilege security principle
In theory, your serverless components may still use an admin user with permission to access and do everything. However, serverless compute platforms such as AWS Lambda encourage you to grant a function permissions to only the services strictly needed for its execution, effectively applying the least-privilege principle. On top of that, by using IAM roles, you can avoid hard-coding credentials or relying on secrets stored in external services or environment variables.
With small serverless components, you are encouraged to grant permissions on a per-service or even per-function level.
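For illustration, here is what a per-function, least-privilege IAM policy might look like, built as a Python dict. The account ID, region, and resource names are made up; the point is the absence of wildcards:

```python
import json

# Hypothetical least-privilege policy: this one function may only read
# one SQS queue and write to one DynamoDB table -- nothing else.
POLICY = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["sqs:ReceiveMessage", "sqs:DeleteMessage", "sqs:GetQueueAttributes"],
            "Resource": "arn:aws:sqs:us-east-1:123456789012:orders-queue",
        },
        {
            "Effect": "Allow",
            "Action": ["dynamodb:PutItem"],
            "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/orders",
        },
    ],
}
policy_document = json.dumps(POLICY)  # ready to attach to the function's role
```

Scoping each statement to a single service and resource keeps the blast radius of a compromised function as small as the function itself.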
5. It allows you to achieve high availability and fault tolerance easily
Most serverless components are designed to offer high availability (HA) out of the box. For instance, AWS Lambda runs your function across multiple availability zones by default and retries failed asynchronous invocations up to two times. Achieving the same with non-serverless resources is feasible but far from trivial.
Similarly, your containerized ECS tasks, your DynamoDB tables, and your S3 objects are, or can easily be, deployed to multiple availability zones (or subnets) for resilience.
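The retry behavior mentioned above can also be made explicit and tuned per function. A sketch of the keyword arguments for boto3's `put_function_event_invoke_config` call on the Lambda client (the function name and the event-age limit are illustrative choices):

```python
def async_invoke_config(function_name: str) -> dict:
    # Keyword arguments for the Lambda client's
    # put_function_event_invoke_config call: state the (default) retry
    # count explicitly and bound how long a failed event may linger.
    return {
        "FunctionName": function_name,
        "MaximumRetryAttempts": 2,         # Lambda's default for async events
        "MaximumEventAgeInSeconds": 3600,  # drop events older than an hour
    }

# Applying it would look like (requires boto3 and AWS credentials):
# boto3.client("lambda").put_function_event_invoke_config(**async_invoke_config("orders-fn"))
```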
6. It enforces Infrastructure as Code
There is great merit in treating your servers like cattle rather than pets. Most DevOps engineers who leverage the “Infrastructure as Code” paradigm would agree with that.
You’ve probably experienced this at some point in your IT career: you meticulously installed everything on your compute instance and built all resources in such a way that the server was configured perfectly. Then one day you come to the office and notice that your server is down. You have no backup, and you didn’t store the code you used to configure the system. It turns out that some environment variables were responsible for defining user access to various resources. Now all of that is gone, and you need to start entirely from scratch.
We don’t have to look only at such extreme failure scenarios to see the danger in treating servers like pets. Imagine that you simply need a copy of the same server and resource configuration to create a development or user-acceptance-test environment. Perhaps you want to create a new instance of the same server for scale or to provide high availability.
With a manual configuration, you always risk that the environments can end up being different.
The serverless approach forces you to take a completely different perspective on defining the resources your application needs. You are required to build self-contained, packaged code that can run on any server in an environment-agnostic way. If a server dies, you lose nothing: redeploying the serverless application provisions all the resources it needs to run.
Is it more difficult? Of course it is! But once you’ve built this repeatable process, you gain all the benefits discussed in this article.
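As a minimal sketch of this repeatable process, an entire resource can be declared in code, here as a CloudFormation template built from a Python dict. The stack, resource, and queue names are hypothetical:

```python
import json

# The queue exists because this code says so; recreating it in a fresh
# account or environment is one API call, not a manual checklist.
TEMPLATE = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "OrdersQueue": {
            "Type": "AWS::SQS::Queue",
            "Properties": {"QueueName": "orders-queue", "VisibilityTimeout": 60},
        },
    },
}
template_body = json.dumps(TEMPLATE, indent=2)

# Deploying would look like (requires boto3 and AWS credentials):
# boto3.client("cloudformation").create_stack(StackName="orders", TemplateBody=template_body)
```

Because the template lives in version control next to the application code, a dev or UAT environment is just another stack created from the same file.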
7. It encourages using existing battle-tested components
If you decide to build a serverless architecture, it’s quite unlikely that you would end up building your own message queuing system or notification service. You would rather rely on common, well-known services offered by your cloud provider. Some examples based on AWS:
- Do you need a message queue? Use SQS.
- Do you need to send notifications? Use SNS.
- Do you need to handle secrets? Use Secrets Manager or Parameter Store.
- Do you need to build a REST API? Use API Gateway.
- Do you need to manage permissions or user access? Use IAM or Cognito.
- Do you need to store some key-value pairs or data objects? Use DynamoDB or simply dump data to S3.
Why is that beneficial? Given that software engineers are smart and talented people, they often start building their own, sometimes overly complex and difficult-to-maintain solutions when they get bored. Offering them a platform that provides standardized, well-known, and well-documented building blocks (such as SQS, SNS, IAM, S3, …) that are fully managed by the cloud provider can greatly improve the maintainability of the entire architecture. And the services mentioned above allow us to build various types of projects in a resilient and decoupled way.
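Leaning on a battle-tested component often reduces your own code to a thin call. A sketch of handing a message off to SQS instead of a home-grown broker; the queue URL and payload shape are hypothetical:

```python
import json


def publish_order(sqs_client, queue_url: str, order: dict) -> dict:
    # Delegate queuing entirely to SQS: durability, retries, and
    # scaling are the managed service's problem, not ours.
    return sqs_client.send_message(QueueUrl=queue_url, MessageBody=json.dumps(order))
```

In production `sqs_client` would be `boto3.client("sqs")`; in tests, any stub with a `send_message` method works, which also keeps the component easy to verify in isolation.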
Things that are harder to accomplish with serverless
As with anything that comprises many small individual components, it’s often hard to see the bigger picture. It may become more difficult to see the relationships between individual elements of a system and to take action when some part of your workflow fails. This is where platforms such as Prefect shine. Prefect provides fine-grained visibility into the states of your workflow runs and gives you confidence that your data platform is healthy, regardless of whether your data flows run on serverless containers, a local machine, Kubernetes, or on-prem environments.
In this article, we investigated seven reasons why serverless platforms encourage useful engineering practices. Among them, we saw that serverless encourages small, self-contained components that can be deployed independently of each other. We noticed that it also helps with the resilience, security, and high availability of the overall infrastructure. Finally, we looked at different serverless building blocks that allow us to build robust and cost-effective architectures, and at how platforms such as Prefect help by providing visibility into the health of your data platform and helping you react to failures.
Thank you for reading!