Serverless has its pitfalls. Here’s how you can avoid them

Written by Nicolas Dao.

Photo by Luiz Hanfilaque on Unsplash

In this post, I will share the lessons I learned over the past year while using Serverless to build mobile and web apps for a tech consultancy in Sydney. For each drawback, I will also recommend one or multiple solutions. Today, I’ll be covering the issues around the following:

  1. FaaS - Cold Start
  2. FaaS - Connection Pooling Limitation
  3. FaaS - No Support For WebSockets
  4. FaaS - Long-Lived Processes
  5. BaaS & FaaS - Losing Infrastructure Control
  6. BaaS & FaaS - Compliance & Security

If you’re already familiar with FaaS, BaaS, and Serverless in general, then skip the following recap and jump straight to the next section Serverless Limitations & Remedies.

Quick Serverless Recap - BaaS vs. FaaS

There are currently two categories of Serverless solutions:

  • FaaS (Function as a Service)
  • BaaS (Backend as a Service)

FaaS is the category with the most buzz at the moment. A FaaS function is a piece of code that you can deploy independently of all your other systems. It usually reacts to events (for example, an upload to your cloud storage, a message published to a pub/sub topic, a change in a database, or an HTTP request).

FaaS is a natural tool if you want to build an event-driven architecture or jump into microservices. The leading FaaS options are AWS Lambda, Google Cloud Functions, Cloudflare Workers, and Fly Edge Apps.
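To make this concrete, here is a minimal sketch of what an HTTP-triggered function might look like on a Node.js runtime such as Google Cloud Functions. The export style and entry-point naming vary by provider, the `Request`/`Response` types come from Express (which Google Cloud Functions uses for HTTP functions), and `helloReport` is just an illustrative name:

```typescript
import { Request, Response } from 'express';

// A single, independently deployable piece of code.
// The provider spawns instances of it in response to incoming HTTP requests
// and scales the number of instances up and down with traffic.
export const helloReport = (req: Request, res: Response) => {
  const name = (req.query.name as string) || 'world';
  res.status(200).json({ message: `Hello, ${name}!` });
};
```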

BaaS is a category with blurrier lines. The term gained popularity with the rise of services like Google Firebase that were branded MBaaS (Mobile Backend as a Service). Those services typically offered the following three features:

  1. No-config backend that could theoretically scale infinitely and be highly-available and fault-tolerant.
  2. Extra middleware features built on top of that backend (for example, user authentication, push notifications).
  3. An SDK for many programming languages so that the developer can get started in minutes.

The term MBaaS became BaaS over time, while the Serverless term also started to gain momentum. More recently, AWS released new products like AWS Aurora Serverless, a Serverless SQL database. That database is a backend product, not a FaaS, and it does not tick all the boxes of the strict MBaaS definition above. However, it still falls into the Serverless category.

The industry is slowly making sense of this new Serverless paradigm. Big providers like AWS and Google are slowly re-branding some of their older products from PaaS (Platform as a Service) to Serverless.

So what is BaaS, then? Of the three features that describe a strict MBaaS, BaaS is any system that fits the first one. To recap: BaaS solutions are pieces of infrastructure that host your app or its data with almost no configuration required. That kind of backend should theoretically scale infinitely while remaining highly available and fault-tolerant.

Popular examples of BaaS span databases, hosting, and data ingestion services.

There are more details about this subject in What Serverless Is and How It Compares To PaaS and IaaS. If you’re interested in the general trend in which Serverless falls, you can also refer to How Serverless Is Automating IT Engineers & Redefining Tech Leadership.

Serverless Limitations & Remedies

1. FaaS - Cold Start

FaaS solutions like AWS Lambda have demonstrated huge gains when solving Map-Reduce challenges (for example, Leveraging AWS Lambda for Image Compression at Scale). However, if you’re trying to provide a fast response to events like HTTP requests, you’ll need to take into account the time required by the function to warm up.

Your function runs inside a virtual environment that the provider spawns and scales based on the traffic it receives (something you do not control). Spawning a new instance takes a few seconds, and after your function has been idle for a while due to low traffic, its environment is torn down and must be spawned again on the next request. That spin-up delay is the cold start.
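In practice, this means everything declared at module scope runs only when a new instance is spawned, while the handler runs on every invocation. A rough sketch, assuming a Node.js function with Express-style request handling (the template map stands in for whatever expensive setup your function really does):

```typescript
import { Request, Response } from 'express';

// Simulates expensive per-instance setup (parsing templates, creating SDK
// clients, warming caches, ...). This runs once per instance, i.e. on every
// cold start, and adds to the latency of the first request that hits a
// freshly spawned instance.
const templates = new Map<string, string>([
  ['daily', 'Daily report for {{date}}'],
  ['weekly', 'Weekly report for {{date}}'],
]);

export const report = (req: Request, res: Response) => {
  // Runs on every invocation; warm instances reuse the setup above.
  const template = templates.get((req.query.type as string) || 'daily');
  res.status(200).send(template ?? 'Unknown report type');
};
```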

I learned that the hard way when deploying a relatively complex reporting REST API on Google Cloud Functions. That API was part of a microservice refactoring effort to break down our big monolithic web API. I started with a low-traffic endpoint, which meant the function was often idle. As a result, the reports powered by that microservice were slow the first time they were accessed.

To fix that issue, I moved our microservice from Google Cloud Functions (FaaS) to Zeit Now (BaaS). That migration allowed me to keep at least one instance up at all times (more about Zeit Now in my next post: Why We Love Zeit Now & When To Use It Over FaaS).

2. FaaS - Connection Pooling Limitation

FaaS conversations rarely mention this limitation. Cloud providers market FaaS as a solution that can scale infinitely. While that may be true of the function itself, most of the resources your function depends on won't scale infinitely.

The number of concurrent connections that your relational database supports is one of those limited resources. The unfriendliness of FaaS towards connection pooling is what makes this problem such a big deal.

Indeed, as I mentioned before, each instance of your function lives in its own isolated, stateless environment. When it connects to a relational database (for example PostgreSQL, MySQL, or Oracle), it should therefore use a connection pool to avoid opening and closing a connection on every request.

Your relational database can only manage a limited number of concurrent connections (depending on the engine and instance size, the default cap is often no more than a few dozen to a couple of hundred). Spawning more function instances than that cap will quickly exhaust your database connections and prevent other systems from accessing the database.

For that reason, I recommend avoiding FaaS if your function needs to talk to a relational DB through a connection pool. If you do need a pool, a few options are available, such as capping each function instance's pool at a single connection, putting a connection-pooling proxy (for example, PgBouncer) in front of the database, or moving to a BaaS like Zeit Now. A sketch of the first option follows.
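Here is a minimal sketch of that first option using the node-postgres (`pg`) library. It assumes a `DATABASE_URL` environment variable and an `orders` table; the pool is created at module scope so warm invocations reuse it, and it is capped at one connection so that N live instances hold at most N connections:

```typescript
import { Pool } from 'pg';
import { Request, Response } from 'express';

// One pool per function instance, created at module scope so it survives
// across warm invocations. `max: 1` caps each instance at a single
// connection, so the total never exceeds the number of live instances.
const pool = new Pool({
  connectionString: process.env.DATABASE_URL, // assumed to be configured
  max: 1,
  idleTimeoutMillis: 30_000, // drop the connection when the instance idles
});

export const listOrders = async (_req: Request, res: Response) => {
  const { rows } = await pool.query('SELECT id, total FROM orders LIMIT 50');
  res.status(200).json(rows);
};
```

Even with `max: 1`, the total connection count still tracks the number of instances the provider decides to spawn, which you do not control. That is why a pooling proxy, or a BaaS with a bounded instance count, is often the safer choice.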

3. FaaS - No Support For WebSockets

This one is kind of obvious. But for those who think they can have their cake and eat it too: you can't maintain a WebSocket connection on a system that is ephemeral by design. If you're looking for a Serverless WebSocket, you'll need to use a BaaS like Zeit Now instead.
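For contrast, here is what a bare-bones WebSocket server looks like with the `ws` package on a host that keeps the process alive (a BaaS like Zeit Now, or any plain long-running container). The point is that the server and every open socket live in the process's memory between messages, which an ephemeral function cannot guarantee:

```typescript
import WebSocket, { WebSocketServer } from 'ws';

// A long-lived process: the server object and all open sockets stay in this
// process's memory for as long as the host keeps it running.
const wss = new WebSocketServer({ port: 8080 });

wss.on('connection', (socket) => {
  socket.send('connected');
  socket.on('message', (data) => {
    // Broadcast each incoming message to every connected client.
    for (const client of wss.clients) {
      if (client.readyState === WebSocket.OPEN) {
        client.send(data.toString());
      }
    }
  });
});
```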

Alternatively, if you're building a Serverless GraphQL API, you can get Subscriptions (which rely on WebSockets) by using AWS AppSync. A great article that explains this use case in greater detail is Running a scalable & reliable GraphQL endpoint with Serverless.

4. FaaS - Long-Lived Processes, Don’t Bother!

At the time of writing, AWS Lambda and Google Cloud Functions can run for no longer than 5 and 9 minutes respectively. If your business logic is a long-running task, you will have to move to a BaaS like Zeit Now instead.

For more details on FaaS limitations, please refer to AWS Lambda quotas and Google Cloud Functions quotas.
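If you have to stay on FaaS despite a long-running job, one common workaround (not specific to this article) is to process the work in chunks and stop before the deadline, persisting a cursor so the next invocation can pick up where the last one left off. A rough sketch for AWS Lambda, using the runtime's `context.getRemainingTimeInMillis()`; the `loadCursor`, `saveCursor`, and `processItem` helpers are hypothetical stand-ins for your own persistence and business logic:

```typescript
import { Context } from 'aws-lambda';

// Hypothetical persistence helpers: in practice these would read and write
// the cursor to DynamoDB, S3, a database row, etc.
const loadCursor = async (): Promise<number> => 0;
const saveCursor = async (_cursor: number): Promise<void> => {};

// Hypothetical unit of the long-running job.
const processItem = async (_index: number): Promise<void> => {};

export const handler = async (_event: unknown, context: Context) => {
  let cursor = await loadCursor();
  const totalItems = 100_000;

  while (cursor < totalItems) {
    // Keep a safety margin (here 10 seconds) before the hard timeout.
    if (context.getRemainingTimeInMillis() < 10_000) {
      await saveCursor(cursor); // resume from here on the next invocation
      return { done: false, cursor };
    }
    await processItem(cursor);
    cursor += 1;
  }
  return { done: true, cursor };
};
```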

5. BaaS & FaaS - Losing Infrastructure Control

If your product requirements demand some degree of control over your infrastructure, Serverless will most likely leave you up the creek. Examples of such problems include:

  • Microservices deployment orchestration. Ending up with a myriad of Serverless microservices will quickly become a deployment nightmare, especially if they need to be versioned together or by domain.
  • Controlling the lifecycle of each server to save on costs.
  • Having long-running tasks on multiple servers.
  • Controlling the exact version of the underlying server OS, or installing specific libraries required by your app.
  • Controlling the exact geo-replication of your app or data to ensure consistent and fast performance globally (there are ways to overcome this in some scenarios. Check out Build a serverless multi-region, active-active backend solution in an hour).

Serverless may fall short in all the above use cases. However, as I've discussed before, Serverless is just an extension of PaaS. If your goal is to focus as much as possible on writing code rather than worrying about the underlying infrastructure's scalability and reliability, leveraging the latest PaaS containerization options such as Google Kubernetes Engine can get you very close to what Serverless has to offer.

6. BaaS & FaaS - Compliance & Security

Serverless shares all the usual concerns about the cloud: you are handing control of your infrastructure to one or more third parties. Depending on the vendor, Serverless may or may not provide the SLA and security levels your business case requires.

Whether Serverless is a go or no-go from a compliance and security point of view really depends on your particular case. Many articles discuss this topic in great detail (like The state of serverless security).

Conclusion

Serverless is not a silver bullet. The gains you get from it depend on how well you understand it. The good news is that the barrier to entry is so low that you should be proficient in no time.

COMING NEXT…

Of course, Serverless has limitations. All technical solutions have them. The question now is how we overcome them. In my next post, I’ll write about suggestions my team and I developed to deal with those limitations: “Why We Love Zeit Now & When To Use It Over FaaS”.

Follow me on Medium - Nicolas Dao - if you’re interested in what’s coming next:

Current posts in this serverless series:

Future posts in the series:

  • Why We Love Zeit Now & When To Use It Over FaaS
  • Serverless Event-Driven Architecture: The Natural Fit
  • How To Manage Back-Pressure With Serverless?
  • GraphQL on Serverless In Less Than 2 Minutes

Serverless has its pitfalls. Here’s how you can avoid them. was originally published in freeCodeCamp on Medium.
