The rise of cloud computing has revolutionized the way businesses approach their digital transformations. One of the latest trends to emerge from this technological shift is the adoption of Function as a Service (FaaS). FaaS is a cloud computing model that allows developers to create and deploy individual functions without the need for managing a full server or infrastructure. This approach offers a range of benefits, including cost savings, scalability, and reduced time-to-market. As a result, FaaS is rapidly gaining traction as the go-to solution for businesses looking to streamline their operations and drive innovation. In this post, we will explore the key reasons why Function as a Service is poised to become the next big disruption in digital transformations.

Faster, Cheaper, Simpler

FaaS is a recent development in the cloud computing arena; it was first introduced in October 2014 (link to GitHub project). Since then, major players such as Google Cloud, Azure, AWS and IBM have introduced FaaS as part of their product offerings.

What is FaaS?

Function as a Service (FaaS) is a serverless cloud computing concept. It lets developers implement and manage the functionality of their applications without the complexity of building and maintaining the underlying infrastructure.

Serverless computing is actually marketing jargon; it does not mean the complete absence of servers. It simply means that the developer does not have to manage the underlying servers and network infrastructure.
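To make this concrete, the unit of deployment in FaaS is just a function. A minimal AWS-Lambda-style handler in Python might look like the sketch below; the event payload shape is an assumption chosen for illustration, not a fixed standard.

```python
import json

def handler(event, context):
    """Entry point invoked by the FaaS platform.

    The platform passes the trigger payload in `event` and runtime
    metadata in `context`; provisioning, scaling and teardown are
    handled by the provider, so the developer ships only this function.
    """
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```

Everything around this function, from capacity to patching, is the provider's problem.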

Why is FaaS the next big disruption in digital transformations?

What makes FaaS a candidate for the next big disruption in digital transformation? In this post I look at the top three advantages of FaaS.

  • Faster

With operational concerns abstracted away, developers can focus on writing code: they can deliver features and iterate faster. Development teams can also be smaller, since they no longer need individuals dedicated to infrastructure. Moreover, with developers not having to manage infrastructure and scaling, operational costs drop significantly.

  • Cheaper

In the case of bare-metal and virtual servers, the practice is to reserve computing capacity, network bandwidth and storage. Developers then deploy their applications on top of this infrastructure and are billed for the duration the infrastructure is reserved, whether it is used or not. The serverless computing model does away with capacity reservation: you pay only for the compute time your functions actually consume.
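The difference between the two billing models can be sketched with some back-of-the-envelope arithmetic; all rates below are made-up round numbers for illustration, not real cloud prices.

```python
# Back-of-the-envelope comparison of the two billing models.
# All rates are illustrative round numbers, not real cloud prices.

HOURS_PER_MONTH = 730

def reserved_cost(hourly_rate):
    # A reserved server bills for every hour, busy or idle.
    return hourly_rate * HOURS_PER_MONTH

def faas_cost(invocations, avg_seconds, rate_per_second):
    # FaaS bills only for the compute time actually consumed.
    return invocations * avg_seconds * rate_per_second

# A sporadic workload: 100,000 requests/month at 200 ms each.
monthly_server = reserved_cost(hourly_rate=0.10)
monthly_faas = faas_cost(invocations=100_000, avg_seconds=0.2,
                         rate_per_second=0.00005)
```

For sporadic workloads the gap is dramatic; for a server that is busy around the clock, reserved capacity can still win.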

  • Simpler

A monolithic application is broken down into smaller services, usually called “microservices”. This architecture pattern is very similar to serverless computing, in which an application is broken down into functions. Serverless computing is therefore complementary to the microservice architecture and shares many of its advantages.

Use Cases

Serverless computing is ideal for workloads with sporadic demand: short, asynchronous, event-driven and highly concurrent tasks. I identify the following use cases as particularly well suited to serverless computing.

  • Database triggers

Event-driven computing was one of the main drivers behind serverless computing. Functions can be used to respond to changes in a database, such as insert, update and delete operations. For instance, a serverless function can write entries into an audit table whenever a record is updated. AWS Lambda is used alongside Amazon DynamoDB to create such database triggers.

Netflix adopted AWS Lambda to help manage their AWS infrastructure with an event-based system.
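The audit-table idea above can be sketched as a Lambda handler attached to a DynamoDB Stream. The table name and field layout here are hypothetical, chosen for illustration; the stream record shape follows DynamoDB Streams' INSERT/MODIFY/REMOVE events.

```python
def audit_items(records):
    """Pure helper: map DynamoDB Stream records to audit-table rows."""
    return [
        {
            "eventId": r["eventID"],
            "action": r["eventName"],  # INSERT / MODIFY / REMOVE
            "keys": r["dynamodb"]["Keys"],
        }
        for r in records
    ]

def handler(event, context):
    """Invoked by a DynamoDB Stream whenever the source table changes."""
    import boto3  # imported lazily so the helper above is testable offline
    table = boto3.resource("dynamodb").Table("AuditLog")  # hypothetical table
    items = audit_items(event["Records"])
    for item in items:
        table.put_item(Item=item)
    return {"processed": len(items)}
```

Keeping the record-to-row mapping in a pure function makes the trigger logic easy to unit-test without touching AWS.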

  • Serverless computing at the edge

Edge computing is described as a key driver for the serverless computing trend. IoT devices generate large volumes of data that need to be processed in real time. Serverless functions at the edge can react to events, such as changes in temperature or water levels, without having to send all the data to the cloud.

Amazon provides AWS Greengrass to run Lambda functions at the edge on local connected devices; these functions can run even when the devices are not connected to the Internet. Deploying functions at the edge can improve the utilization of resource-constrained devices there, and because the infrastructure details are abstracted away from developers, the same code can be deployed on multiple devices. Various Content Delivery Network providers also offer serverless computing on their edge infrastructure, which developers can use for tasks such as modifying headers, carrying out A/B testing and inspecting authorization tokens.
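The header-modification task can be sketched as a Lambda@Edge-style function running at a CDN edge location. The event structure below follows CloudFront's viewer-response shape, simplified for illustration.

```python
def handler(event, context):
    """Lambda@Edge-style sketch: inject a security header into every
    response before the CDN returns it to the viewer.

    The event shape below follows the CloudFront viewer-response
    structure, simplified for illustration.
    """
    response = event["Records"][0]["cf"]["response"]
    response["headers"]["strict-transport-security"] = [{
        "key": "Strict-Transport-Security",
        "value": "max-age=63072000; includeSubDomains",
    }]
    return response
```

Because the function runs at the edge, the header is added close to the user without a round trip to the origin server.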

  • Media Processing

In media processing, an input file goes through several processing stages before it is ready to be served to the end user. For instance, when a user uploads a raw image, a thumbnail must be generated and copied to blob storage, followed by a database update. The image might also be processed further for image recognition and other metadata extraction. All of these steps are small processes, but they need to run asynchronously and in parallel. Serverless computing is ideal for this use case, as the functions need to run only when an image is uploaded.
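The upload pipeline can be sketched as an S3-triggered function. The key-naming scheme is an assumption for illustration, and the resize, blob copy and database stages are stubbed out as comments; in practice each stage can be its own function invoked asynchronously so the steps run in parallel.

```python
import os

def thumbnail_key(original_key, size=128):
    """Derive the blob-storage key for a thumbnail (naming is illustrative)."""
    base, ext = os.path.splitext(original_key)
    return f"{base}_thumb{size}{ext}"

def handler(event, context):
    """S3-upload-triggered sketch: runs once per uploaded image.

    The heavy lifting is stubbed with comments; each numbered stage
    could be a separate function invoked asynchronously in parallel.
    """
    record = event["Records"][0]["s3"]
    bucket = record["bucket"]["name"]
    key = record["object"]["key"]
    # 1. download the image  2. resize it  3. upload the thumbnail
    # 4. update the database with both object keys
    return {"source": f"{bucket}/{key}", "thumbnail": thumbnail_key(key)}
```

Since the function only runs on upload events, an idle gallery costs nothing.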

Why is FaaS trending among developers?

Developers hate maintenance. The concept of a serverless architecture is a dream for developers, and its popularity is growing fast; see the Google Trends chart for “serverless” over the last five years.

Google Trends for “serverless” over the last five years.

On the other hand, it is hard to imagine FaaS as a complete replacement for conventional application architectures.

Function-based apps are a good fit for microservice-style architectures and background services. Depending on how a function is built, developers can lower costs even further by choosing the cloud provider that best fits their use case.

With the flexibility of Function as a Service, applications can handle greater information flow without infrastructure bottlenecks, and therefore become more efficient and cost-effective.

Leave a comment

✍️ Hi there! Thank you for reading my post. Please feel free to leave a comment below. Your input is valuable to me and I would be happy to engage in a discussion with you. Thanks again for reading and I look forward to hearing from you!

Continue reading

I love writing about technology because it allows me to explore the endless possibilities and advancements of our world. It’s fascinating to see how far we have come and to imagine where we might go next. As a technology writer, I have the opportunity to learn about cutting-edge developments and share them with others, helping to educate and inspire the next generation of innovators. Plus, writing about technology gives me the chance to combine my passion for storytelling with my interest in emerging technologies and trends. Continue reading here:


  • Amazon API Gateway. Official AWS documentation.
  • Amazon DynamoDB. Official AWS documentation.
  • Amazon DynamoDB: Processing New Items in a DynamoDB Table. Official AWS documentation.
  • Microsoft Azure. Introduction to Azure Functions. Official Azure documentation.
  • Danny Bradbury. Microservices: Small parts with big advantages.
  • Mohsen Mosleh, Kia Dalili and Babak Heydari. Distributed or monolithic? A computational architecture decision framework.
