One of the recurring themes I hear from customers trying to use OCLC APIs is the difficulty of getting code up and running because they lack the expertise to set up and run servers to support an application. While Amazon Web Services EC2 and other server hosting options eliminate the need to own technical infrastructure, hosted servers typically still need some degree of “DevOps” work. So what options, if any, are available to libraries that want to create and run applications with minimal DevOps effort? One of the best options is serverless computing.
What is serverless?
In a serverless model, server management is completely hidden from the developer or operator. Developers create their code in a language supported by the platform and then deploy the code to the platform infrastructure. Many of these platforms support a pay-as-you-go fee structure in which a library would pay based on how much an application is used. Examples of this are AWS Lambda, Google Cloud Functions, and Microsoft Azure Functions.
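To make the “deploy code, not servers” idea concrete, here is a minimal sketch of what a deployable function looks like. The handler signature follows AWS Lambda’s Python convention; the greeting logic itself is just an illustrative placeholder, not part of any real OCLC application.

```python
import json

def lambda_handler(event, context):
    """Entry point the platform invokes; 'event' carries the request data."""
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```

The developer writes and deploys only this function; provisioning, scaling, and patching the machine it runs on are the platform’s job. The same handler can also be called directly in local tests, which is a common way to develop serverless code before deploying it.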
One important aspect of serverless is that the code only executes application logic; the serverless platform itself typically cannot store data. Instead, serverless applications can use APIs, other cloud-based services, or serverless tools to store data or perform other tasks.
Why use serverless?
Serverless has several advantages for libraries. First, libraries do not need to purchase or maintain the technical infrastructure to host a serverless application. Second, libraries can defer server management to a provider for whom it is a core competency. Finally, because serverless applications have dynamically managed resources, libraries pay based on application usage, which can significantly reduce the overall cost of an application with low usage.
There are a variety of cloud-based serverless platforms, including AWS, Microsoft Azure, and Google Cloud Platform. Originally, serverless was synonymous with FaaS (function as a service). FaaS is a type of cloud computing service in which developers can write, run, and manage applications without managing any of the complexity of the infrastructure. In this model, developers merely have control over the code that is deployed to the platform. However, over time “serverless” has become an overloaded term that encompasses a spectrum of tools and options that give developers differing degrees of control over the application code, server configuration, and provisioning.
Because in the classic FaaS serverless model developers only control code, a serverless application MUST be written in a programming language supported by the cloud provider. Different cloud providers support different languages. Node.js and Python are two commonly supported languages. This fall, AWS Lambda made a move to dramatically increase the number of supported languages by adding the Lambda Runtime API. Developers can publish their own runtime or use existing runtimes from the community. As a result, PHP code can now run on AWS.
Another important aspect of serverless to understand is that because these applications run on demand, they suffer from latency due to “cold start.” In the serverless world, the term “cold start” refers to the increased latency in a serverless application when it is invoked after not being used for a long period of time. This latency varies across supported languages and needs to be considered when creating a serverless application that has high performance demands.
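One common way to soften the cold-start penalty is to do expensive setup (loading configuration, creating API clients) at module scope, where platforms such as AWS Lambda run it once per container rather than on every invocation. The sketch below simulates that pattern; the config dictionary stands in for whatever real setup an application would do.

```python
# Code at module scope runs once per container (the "cold start"),
# so expensive setup belongs here, not inside the handler.
EXPENSIVE_CONFIG = {"api_base": "https://example.org"}  # stand-in for real setup

_invocations = 0  # persists across warm invocations of the same container

def handler(event, context):
    """Warm invocations reuse EXPENSIVE_CONFIG; only the first call paid for setup."""
    global _invocations
    _invocations += 1
    return {"calls": _invocations, "api_base": EXPENSIVE_CONFIG["api_base"]}
```

Because the module state survives between warm invocations, the counter keeps climbing until the platform recycles the container, at which point the next request pays the cold-start cost again.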
A third facet to serverless is the fee model. Each transaction within a serverless application has a cost, and while “you pay for what you use,” the pieces of infrastructure your application is using and the cost of each can be a complex equation. A basic AWS serverless application typically uses the following resources:
- Lambda: runs application code
- S3: stores application code
- CloudFormation: manages deployment
- CloudWatch: manages application logs
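A rough sense of the pay-as-you-go math can be had from a back-of-the-envelope estimate. The sketch below uses the general shape of Lambda billing (a per-request fee plus a fee per GB-second of compute); the unit prices are assumptions for illustration only, so check the current AWS pricing pages before relying on any figures.

```python
# Illustrative unit prices -- assumptions, not current AWS rates.
PRICE_PER_MILLION_REQUESTS = 0.20   # USD per 1M requests (assumed)
PRICE_PER_GB_SECOND = 0.0000166667  # USD per GB-second (assumed)

def estimate_lambda_cost(requests, avg_duration_ms, memory_mb):
    """Estimate monthly Lambda compute cost for a given workload."""
    request_cost = requests / 1_000_000 * PRICE_PER_MILLION_REQUESTS
    # GB-seconds = invocations * duration (s) * memory (GB)
    gb_seconds = requests * (avg_duration_ms / 1000) * (memory_mb / 1024)
    compute_cost = gb_seconds * PRICE_PER_GB_SECOND
    return round(request_cost + compute_cost, 2)

# e.g. 100,000 requests a month, 200 ms each, at 128 MB of memory:
print(estimate_lambda_cost(100_000, 200, 128))  # -> 0.06
```

Even this simplified model shows why serverless favors bursty, low-volume workloads: at modest usage the monthly compute bill is pennies, but the same formula grows linearly with request volume.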
Many of the applications discussed in this series use other resources within the AWS infrastructure, such as:
- Key Management Service: stores encryption keys
- API Gateway: handles HTTP requests
- DynamoDB: stores data
Therefore, costs need careful consideration when choosing serverless. Applications that serve a constant high-volume stream of transactions may not be well suited to it.
Getting started writing a serverless application is similar to writing a traditional application. Beyond choosing a supported language, developing a serverless application means determining the application’s data source. This can be an API or some other cloud-based service, such as S3, DynamoDB, or Amazon RDS. Additionally, if the application creates and caches files beyond runtime, the developer must decide on a persistent data store for the files. Serverless also presents the challenge of storing credentials securely in the serverless platform.
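For the credential challenge, a common minimal approach is to read secrets from environment variables that the platform injects into the function (on AWS Lambda these can be encrypted at rest with the Key Management Service mentioned above). The variable names below are hypothetical placeholders, not actual OCLC configuration.

```python
import os

def get_api_credentials():
    """Read API credentials from environment variables set in the
    function's configuration. WSKEY and WSKEY_SECRET are illustrative
    names; real deployments would use whatever names they configured."""
    key = os.environ.get("WSKEY")
    secret = os.environ.get("WSKEY_SECRET")
    if not key or not secret:
        raise RuntimeError("API credentials are not configured")
    return key, secret
```

Keeping credentials out of the deployed code package means they can be rotated in the platform configuration without redeploying the application.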
Over the course of the coming weeks, we’ll be covering several of these topics as we take a deeper dive into serverless technologies. We’re also offering two webinars on serverless technologies, starting with “Introduction to Serverless Technologies” in March, followed by “Serverless in Practice” in April.
In our next blog post, we’ll start to look at serverless in practice in the context of an actual application.
Senior Product Analyst