Serverless is a technology that promises to be a natural evolution of the cloud services model. This article explains the technology and how it works, illustrates it with a few use cases, and describes the available implementation options.
Evolution of the cloud service models
The term serverless first appeared in an article by Ken Fromm of Iron.io in 2012. Even though applications have rapidly moved to the cloud in either a PaaS or SaaS service model over the past few years, they still need to land on an application server and ultimately on a piece of hardware. Deployment and administration are massively centralised, but the underlying stack is still there – albeit less visible – and does indeed need administration.
This is important. Technologies such as Docker promise a great level of portability of applications, but there still is more to consider when deploying a container than just the application or its code. Frameworks, application servers, the underlying operating system (and their version numbers) are just a few examples – and their monitoring, management and maintenance.
Also, what we consider applications has changed rapidly over the past few years. Monolithic all-in-one applications are gradually being replaced by a component architecture, where the components themselves continue to become more fine-grained and loosely coupled. Such a new architecture needs a new service model, beyond PaaS and SaaS. But by and large, serverless is just a new – different – way of treating workloads and background services.
Serverless, then, is about running code without being conscious of where or how it runs.
Implications of serverless
Serverless does not mean the underlying infrastructure disappears altogether (and as such, the term is a misnomer), but it does imply that from a functional perspective the dependencies between code, applications and servers are greatly diminished, as are operational maintenance and monitoring. Serverless also does not revolutionise the IT cost model in the way the shift from on-premises (CapEx) to cloud (OpEx) did. It does, however, allow for more fine-grained pay-per-use billing: Amazon calls it subsecond metering.
An implementation of serverless
A serverless function (the term FaaS was introduced by hook.io in 2014) is one way of achieving a serverless architecture. With FaaS, devops can focus on writing and deploying code without provisioning or managing servers. Currently, there are two ways of using such an architecture: by deploying to a public serverless environment such as Amazon Lambda, Google Cloud Functions or Microsoft Azure Functions, or by running a specific private serverless environment such as Apache OpenWhisk or Iron.io. Typically, the function is deployed to the serverless architecture, which in turn takes care of running (and keeping available), scaling, managing and monitoring the function.
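To make this concrete, here is a minimal sketch of what such a function looks like in the AWS Lambda style for Python: the platform invokes a `handler(event, context)` entry point, passing the trigger payload as `event`. The function name and event shape are illustrative, not prescribed.

```python
import json

def handler(event, context):
    """Entry point the platform invokes; 'event' carries the trigger payload.

    A minimal AWS-Lambda-style sketch: the function receives its input,
    does its work, returns a result and exits. It provisions nothing.
    """
    name = event.get("name", "world")
    return {"message": f"Hello, {name}!"}

# Locally, the handler can be exercised by calling it directly:
print(json.dumps(handler({"name": "serverless"}, None)))
```

Everything around this function – packaging, running, scaling, monitoring – is handled by the platform, which is exactly the point of the model.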
For the remainder of this article, FaaS will be the implementation method of choice.
Not every application function can be translated to a FaaS. Serverless functions have the following (restrictive) characteristics:
- Small and fast
A serverless function should execute, run and die as quickly as possible. A FaaS is not suitable for long-running, memory- and CPU-intensive processes.
- Ephemeral: short lived and run once (single or parallel)
A function is short lived. For example, when hosted on AWS Lambda it can only run for a predefined maximum of 5 minutes. The function is executed once, but massively in parallel when needed. This implies that when designing a function, parallelism is a prerequisite.
- Event driven
A function is triggered by an ad-hoc (external) event, after which it works on the input it receives and then quits. The various serverless platforms have each defined a number of event types, such as messages received from a message broker (Amazon Kinesis), intervals or timers, and file changes (a newly discovered file in an Azure Blob storage container). It can also be an inbound request from an API gateway.
- No TCP services
A function relies on external, third-party resources (such as API gateways) to publish its results or for any other form of communication.
- No state
Needless to say, due to their run once and short-lived nature, functions do not maintain state. This design pattern is in line with the Twelve Factor app methodology for building software as a service. When state needs to be stored, it is necessary to rely on a third-party data store or cache.
- Managed by a third party
Provided you go for the online serverless platforms, you outsource operations to an external party in a way similar to IaaS and PaaS.
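The event-driven and stateless characteristics above can be sketched together. In the example below the event structure mimics the S3 notification format, and the external data store is represented by a plain dict purely for illustration; a real function would use a database or cache client instead.

```python
def extract_s3_objects(event):
    """Pull (bucket, key) pairs from an S3-style notification event."""
    return [
        (r["s3"]["bucket"]["name"], r["s3"]["object"]["key"])
        for r in event.get("Records", [])
    ]

def handler(event, context, store):
    """Process each referenced object and persist results externally.

    The handler is stateless: everything it needs arrives in the event,
    and anything worth keeping goes to 'store', which stands in for a
    third-party data store (e.g. a database or cache).
    """
    for bucket, key in extract_s3_objects(event):
        store[f"{bucket}/{key}"] = "processed"
    return len(store)

sample_event = {
    "Records": [
        {"s3": {"bucket": {"name": "uploads"}, "object": {"key": "img.png"}}}
    ]
}
external_store = {}
handler(sample_event, None, external_store)
```

Nothing survives between invocations inside the function itself; state lives entirely in the injected store, in line with the Twelve Factor approach mentioned above.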
Disadvantages (or, more positively: design and architecture considerations)
Whether the list of disadvantages below contains actual disadvantages depends on how you view them. A FaaS has certain restrictions, but these are by design. The number of languages supported by current serverless solutions is an actual disadvantage, but it is growing as FaaS matures.
- (API) gateway needed
One of the characteristics of a FaaS is that it has no TCP services (or any distribution method, for that matter) built in. This means you rely on gateways for communication with the outside world, such as an API gateway. A gateway typically transforms incoming (HTTP) requests and routes them to the appropriate FaaS function, and in turn transforms the function result into an (HTTP) response, which it then delivers to the original requester. All offline and online serverless providers offer such a gateway, but of course this means extra cost.
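The request/response transformation described above can be sketched in the style of the AWS API Gateway proxy integration, where the gateway hands the function a request object and maps a returned statusCode/headers/body structure back to an HTTP response. The payload fields here are made up for illustration.

```python
import json

def handler(event, context):
    """Handle an HTTP request routed in by an API gateway.

    The gateway transforms the inbound HTTP request into 'event' and turns
    the returned dict (statusCode/headers/body) back into an HTTP response.
    """
    body = json.loads(event.get("body") or "{}")
    if "name" not in body:
        return {"statusCode": 400,
                "headers": {"Content-Type": "application/json"},
                "body": json.dumps({"error": "missing 'name'"})}
    return {"statusCode": 200,
            "headers": {"Content-Type": "application/json"},
            "body": json.dumps({"greeting": f"Hello, {body['name']}"})}
```

The function itself never opens a socket or binds a port; all HTTP concerns stay with the gateway.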
- Startup latency
If your function is relatively small (a few hundred lines of code), you will notice relatively little startup latency. When functions become bigger, however, latency increases. Using a JVM-based language (Scala, Clojure) adds to that latency because the JVM needs to be spun up. The actual startup latency therefore varies from a few milliseconds to a few seconds. This implies that FaaS is better suited for functionality where latency is not a big concern, for asynchronous implementations, or for implementations where messages flow constantly – which prevents renewed JVM spin-up.
- Administration and monitoring are still happening
It’s not like administration is going anywhere: it still happens, albeit less visibly. And administration is done by real people, with just a higher level of automation (which is not necessarily a good thing). Also, debugging, security and monitoring are all quite hard with the current state of serverless. This might change as the technology matures.
- Limited number of programming languages supported
For Google and Amazon, the supported programming languages are limited to Java, Node.js, Python and C#. Additionally, with AWS Lambda, Ruby is possible through JRuby. Azure supports F#, PHP, Python, C# and Node.js.
However, in AWS Lambda it is possible to run an executable that is packaged with the function, which opens up possibilities for running any language. In Azure, something similar is possible.
- Limited metrics
As stated before, the limits on certain metrics are more often than not an architectural decision: a function that should run fast and only once need not run for more than 5 minutes. Even so, these are the limits. Execution time ranges between 5 (AWS) and 10 (Google and Microsoft) minutes. Memory ranges between 128 MB and 1.5 GB (AWS and Microsoft) or 2 GB (Google). Available ephemeral disk space is 512 MB (Amazon, Google and Microsoft).
- Testing (performance) needs rethinking
An entirely new testing strategy that focuses on expected output is required. Such a strategy is not dissimilar to how containers, PaaS or SaaS are tested, however. On a positive note, testing a single function should not be too hard. Integration testing, however, might prove more challenging.
- Security shifts to the provider
As applications are split up into separate functions, implementing a security strategy becomes more important and inherently more complicated. One could argue that – contrary to, for example, containers – the security implementation shifts from the application to the serverless provider. Also, the run-once and short-lived nature of a function decreases the attack surface. And finally, most serverless providers limit the number of functions that can be executed concurrently, which reduces the impact of a DDoS attack. On the other hand, any service that you outsource means another level of security you need to be aware of and – worst case – have no control over. By outsourcing you might also lose or fragment a security setup that worked properly before. Finally, maxing out the number of services during a DDoS attack also maxes out your service bill.
Serverless compared to microservices
Microservices are all about breaking up application functionality to such an extent that the smallest possible usable functionality remains. This functionality can then be shared within the overarching software architecture, which means, for example, that all applications call the same service where user management or billing is concerned. For every service, there is a well-defined set of rules of engagement (the service contract) and a usage manual (the API). As with containers (and since containers are the delivery vehicle for most microservices, this should come as no surprise), a microservice still relies on an underlying mechanism that you need to manage and maintain. A FaaS is sometimes referred to as a nanoservice, but I think this does no justice to the differences between the two.
Containers or Serverless?
Container management platforms such as Kubernetes, and entire DevOps suites such as OpenShift, are rapidly evolving and are starting to offer some of the advantages that serverless architectures do, such as rapid (auto)scaling. Containers could be a delivery mechanism for FaaS, with OpenFaaS as an interesting example. Ultimately, which architectural pattern to use depends largely on the DevOps engineer, and both container and serverless architectures have their advantages and disadvantages. It will be interesting to see in which direction both develop.
A real world example
The combination of FaaS and an API gateway offers powerful possibilities when designing single-page web applications. Serverless functions are also used as the integration layer between various services. Finally, serverless functions can be used to offload specific resource-intensive tasks, such as in this example from Amazon Lambda’s case study page:
“When a screenshot is taken, the binary image data are first uploaded to Amazon Simple Storage Service (Amazon S3). An Amazon S3 event then triggers AWS Lambda for image processing. After this, queues containing the binary image data output from Amazon Simple Queue Service (Amazon SQS) are imported into the on-premises servers to save the processed image data.”
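The pipeline in the quote can be sketched as a single handler. To keep the flow visible without AWS credentials, the S3 and SQS clients are replaced here by injected callables; a real implementation would use boto3 (get_object to fetch the image, send_message to enqueue the result), and the "image processing" step is reduced to computing the payload size.

```python
import json

def handler(event, context, fetch_image, send_to_queue):
    """S3 upload event in, processed image data out to a queue.

    'fetch_image' and 'send_to_queue' stand in for S3 and SQS clients
    (in production: boto3 get_object / send_message). The processing
    step is deliberately trivial for illustration.
    """
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        image_bytes = fetch_image(bucket, key)                 # download from S3
        processed = {"object": key, "size": len(image_bytes)}  # "image processing"
        send_to_queue(json.dumps(processed))                   # hand off via SQS

queue = []
event = {"Records": [{"s3": {"bucket": {"name": "screens"},
                             "object": {"key": "shot1.png"}}}]}
handler(event, None,
        fetch_image=lambda b, k: b"\x89PNG...",
        send_to_queue=queue.append)
```

The on-premises servers in the quote would then consume the queue at their own pace, decoupled from the bursty upload traffic.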
What will it bring?
Serverless architectures hold a lot of promise, which the marketing machines advertise frequently: instant, unlimited and continuous scaling, a must-have for the modern enterprise. A clear serverless reference architecture is not yet available, however. There is agreement on a few assumptions, such as that workloads that are CPU-intensive or require access to large amounts of data – such as search or batch jobs – should run on a server, accessed by a FaaS. Functionality that requires authentication could be provided by a FaaS interacting with a third-party API. The user interface and all logic related to it run in the client. Of course, such decisions are application-specific and require thought and planning.
Architectural considerations aside, FaaS opens up the possibility of decreasing hosting costs further than IaaS or PaaS can. It also becomes possible to meter (and pay for) actual usage – where actual is even more precise than with SaaS, down to units of 100 ms in the case of AWS Lambda.
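The effect of subsecond metering can be illustrated with a quick calculation. The billing model sketched here follows the common pattern of memory times duration, with duration rounded up to 100 ms units; the rate used is a hypothetical price per GB-second, not an actual quote.

```python
def invocation_cost(duration_ms, memory_mb, price_per_gb_second):
    """Cost of a single invocation under 100 ms metering units.

    Duration is rounded up to the next 100 ms unit, then billed as
    memory (GB) x billed time (s) x price per GB-second.
    """
    billed_ms = -(-duration_ms // 100) * 100  # ceiling to 100 ms units
    gb_seconds = (memory_mb / 1024) * (billed_ms / 1000)
    return gb_seconds * price_per_gb_second

# Hypothetical rate; a 230 ms run at 512 MB bills as 300 ms = 0.15 GB-seconds.
rate = 0.0000166667
print(invocation_cost(230, 512, rate))
```

Compared with paying for an idle server by the hour, this granularity is where the cost advantage of FaaS comes from: you pay for 300 ms, not for the minutes or hours the capacity sat waiting.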
A glimpse into the future of serverless is available as well: Amazon has already introduced a next-level implementation of AWS Lambda in the form of AWS Greengrass (https://aws.amazon.com/greengrass/). It enables edge computing by bringing cloud functionality to local (IoT) devices, allowing developers to run serverless code on those devices even while offline, interacting with the cloud when a connection is available.
What implementation options are available?
Two different implementation options are available. In the public cloud, the aforementioned Amazon Lambda, Microsoft Azure Functions and Google Cloud Functions are the main options (even though there are other, independent vendors offering their own solutions). Amazon has the most advanced version available, with some interesting development tools such as Apex. The disadvantage of such a solution is the commitment you make to a specific vendor without knowing (due to the relative youth of the technology) whether you will be able to move away at a later stage.
For on-premises use, there are a few standalone frameworks available – the open source IronFunctions by Iron.io, or Apache OpenWhisk. In addition, a lot of development is going on around Kubernetes, with projects such as Fission, Kubeless and Funktion, the latter of which also works on Red Hat OpenShift.
An interesting list of serverless frameworks and solutions is available on GitHub.