In this article, we’ll demystify one of the main sales tactics used by Cloud providers: serverless & autoscaling features.
This article is a continuation of the previous one, “The main disadvantages of Cloud computing that you will notice when it’s already too late”, where other (pre)sales tricks used by Cloud providers to acquire new clients are exposed.
Let’s start by clarifying the concepts of serverless computing and autoscaling.
Autoscaling is a method that dynamically adjusts the amount of computing resources based on the load.
For example, the number of servers running behind a load balancer may be automatically increased or decreased based on the current load (e.g. the number of users, some batch jobs, etc.).
The point of autoscaling is to provide more computing power during peak hours and, during times of low load, to reduce the number of active computing resources, which in theory should lower electricity and data-center cooling costs.
Alternatively, during low-load periods, the unused resources can be put to work on other jobs such as ETL pipelines or re-training ML models.
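To make the idea concrete, here is a minimal sketch of threshold-based autoscaling logic in Python; get_cpu_utilization and scale_to are hypothetical placeholders for a real metrics source and provisioning API, and the thresholds are illustrative:

    # Minimal threshold-based autoscaler (hypothetical APIs).
    import time

    MIN_SERVERS = 2
    MAX_SERVERS = 10
    SCALE_UP_AT = 0.80    # average CPU above 80% -> add a server
    SCALE_DOWN_AT = 0.30  # average CPU below 30% -> remove a server

    def autoscale_loop(get_cpu_utilization, scale_to, current=MIN_SERVERS):
        while True:
            cpu = get_cpu_utilization()  # fleet-wide average, 0.0..1.0
            if cpu > SCALE_UP_AT and current < MAX_SERVERS:
                current += 1
                scale_to(current)
            elif cpu < SCALE_DOWN_AT and current > MIN_SERVERS:
                current -= 1
                scale_to(current)
            time.sleep(60)  # evaluate once per minute to avoid flapping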
Serverless computing is an execution model in which new resources are allocated on demand.
The provider, whether on-premises or in the cloud, takes care of the servers on behalf of its customers.
The main benefit of the serverless model, for certain use cases, is that developers of serverless applications are not concerned with capacity planning, configuration, management, maintenance, fault tolerance, or scaling of containers, VMs, or physical servers; and, most relevant for Cloud computing, you “pay as you go”.
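As an illustration of how little infrastructure code the developer writes, here is a trivial function in the AWS Lambda Python style; the handler signature follows Lambda’s documented convention, while the business logic is a made-up example:

    import json

    # The platform provisions compute, invokes this handler per request,
    # and bills only for execution time ("pay as you go").
    def lambda_handler(event, context):
        # 'event' carries the request payload; 'context' carries runtime info.
        name = event.get("name", "world")
        return {
            "statusCode": 200,
            "body": json.dumps({"message": f"Hello, {name}!"}),
        }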
Serverless computing and autoscaling are often surrounded by many myths.
One of the biggest myths is that serverless & auto-scaling are exclusively tied to Cloud.
This claim is far from the truth, as there is a range of software vendors enabling serverless & autoscaling in on-premises environments.
Some of the well-known ones include VMware, Canonical, Platform9, and many others.
Now that we know serverless & autoscaling can be used in both on-premises and cloud environments, let’s look at the main drawbacks of these two architectures.
1. Complexity of debugging, monitoring, and performance tuning
There’s an old saying that’s especially relevant in software development: Keep it simple, stupid.
Unfortunately, the trends in IT are moving in the opposite direction, towards ever-increasing complexity.
In practice, adopting an autoscaling & serverless architecture requires a tremendous effort to build monitoring, makes troubleshooting time-consuming, and makes performance tuning demanding.
When defining architecture, it’s often forgotten that there’s a whole range of architectures and computing models, each with its own advantages and drawbacks.
Just like you wouldn’t go skiing in summer clothes, the same applies to software architectures: you need to choose the one that best suits the task at hand.
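One common way to tame debugging across many short-lived invocations is to propagate a correlation ID and emit structured logs that carry it; the sketch below uses only the Python standard library, and the field names are illustrative:

    import json
    import logging
    import uuid

    logging.basicConfig(level=logging.INFO, format="%(message)s")
    log = logging.getLogger("app")

    def handle(event):
        # Reuse the caller's correlation ID, or mint one at the edge.
        corr_id = event.get("correlation_id") or str(uuid.uuid4())
        log.info(json.dumps({"correlation_id": corr_id, "stage": "start"}))
        # ... business logic ...
        log.info(json.dumps({"correlation_id": corr_id, "stage": "done"}))
        # Pass the ID downstream so the next function logs the same one.
        return {"correlation_id": corr_id}

With every log line carrying the same ID, one request can be traced across services in a log aggregator, which softens, but does not remove, the monitoring burden.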
2. Vendor lock-in
Vendor lock-in particularly pertains to cloud vendors and, to a lesser extent, to on-premises solutions.
Once you choose a vendor, it’s nearly impossible to change later on.
The reason behind this is that each vendor has its specificities and implementation methods that are incompatible with others.
In other words, there is no standard that defines compatibility among vendors for serverless & autoscaling.
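The lock-in is visible even in a trivial HTTP function: the documented Python entry points for AWS Lambda and Azure Functions (the v1 programming model, which requires the azure-functions package) are structurally different, so even a “hello world” cannot be moved without changes:

    # AWS Lambda style:
    def lambda_handler(event, context):
        return {"statusCode": 200, "body": "hello"}

    # Azure Functions style (v1 Python programming model):
    import azure.functions as func

    def main(req: func.HttpRequest) -> func.HttpResponse:
        return func.HttpResponse("hello", status_code=200)

And the code is the easy part: triggers, permissions, and deployment descriptors diverge far more.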
3. Latency of the serverless computing model and slow(er) performance of I/O-related tasks
Similar to how the JVM (Java Virtual Machine) takes some time to initialize on first launch, serverless functions incur a “cold start” delay when a new instance is spun up.
If the cold start latency is not acceptable, it’s advisable to use a different computing model, such as traditional virtualization.
As for the slow(er) I/O performance: memory- and CPU-intensive applications tend to perform best in serverless computing, thanks to the underlying container technology, which is typically based on Kubernetes, while I/O-heavy workloads suffer.
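When serverless is still the right fit, a standard mitigation is to perform expensive initialization outside the handler so it runs only on a cold start; the sketch follows the Lambda-style convention used above, and load_model() is a hypothetical expensive setup step:

    import time

    def load_model():
        # Hypothetical expensive setup (DB pool, ML model, JVM-like init).
        time.sleep(2)
        return object()

    MODEL = load_model()  # runs once per cold start, not per request

    def lambda_handler(event, context):
        # Warm invocations reuse MODEL and skip load_model() entirely.
        return {"statusCode": 200, "model_id": id(MODEL)}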
4. Applications must be tailored/developed for the serverless computing model from the beginning, and must target one particular vendor
Unlike other models (classic virtualization, or partly container-based apps), the specifics of how the serverless computing model is implemented mean the entire application must be tailored to work exclusively with a single cloud vendor (see point 2).
In case the application needs to be migrated to another platform (e.g., from AWS to Azure), it requires a re-write of the architecture and the application itself, which represents a significant effort.
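A partial defence, sketched below under the assumption that your logic can be kept vendor-neutral, is to confine vendor specifics to thin adapters; greet() and the event shapes are illustrative, not a complete portability layer:

    def greet(name: str) -> str:
        # Pure business logic; no vendor types anywhere.
        return f"Hello, {name}!"

    # AWS Lambda adapter (documented 'event, context' convention):
    def lambda_handler(event, context):
        return {"statusCode": 200, "body": greet(event.get("name", "world"))}

A second vendor would get its own small adapter around greet(), but triggers, IAM, and deployment still need vendor-specific rework, so the migration effort shrinks rather than disappears.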
5. Cost control (especially in the cloud)
Cost control, especially in the cloud, is nearly mission impossible due to intentionally complex pricing policies.
Although all Cloud providers promise significant cost savings when using an autoscaling & serverless architecture (the “pay-as-you-go” model), in reality the traditional virtualization model in on-premises environments is several orders of magnitude more cost-effective, and with on-premises Kubernetes the difference is even greater.
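To see why estimating is hard even in the simplest case, here is the general shape of function-as-a-service billing (GB-seconds of compute plus per-request fees); the rates below are illustrative assumptions, not current list prices:

    # Rates are assumptions; real pricing varies by region, tier,
    # free allowances, and changes over time.
    PRICE_PER_GB_SECOND = 0.0000166667  # assumed compute rate, USD
    PRICE_PER_MILLION_REQUESTS = 0.20   # assumed request rate, USD

    def monthly_cost(invocations, avg_ms, memory_mb):
        gb_seconds = invocations * (avg_ms / 1000.0) * (memory_mb / 1024.0)
        compute = gb_seconds * PRICE_PER_GB_SECOND
        requests = invocations / 1_000_000 * PRICE_PER_MILLION_REQUESTS
        return compute + requests

    # 50M invocations/month, 200 ms average, 512 MB of memory:
    print(f"${monthly_cost(50_000_000, 200, 512):,.2f}")  # ~ $93.33

And this simple model still excludes egress, storage, API gateways, and logging, which is exactly where the surprises come from.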
6. Need for detailed configuration to achieve optimal autoscaling
When it comes to autoscaling, the main drawback is the need to tune a large number of configuration options to achieve optimal scaling.
Furthermore, a significant challenge is accurately predicting the required resources during periods of high demand.
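As a taste of how many knobs interact, here is a sketch of the typical parameters an autoscaler exposes and the proportional rule many of them use (Kubernetes’ HPA works similarly); the names are generic stand-ins for what each vendor calls them:

    from dataclasses import dataclass

    @dataclass
    class ScalingPolicy:
        min_replicas: int = 2
        max_replicas: int = 20
        target_cpu: float = 0.65          # desired average utilization
        scale_up_cooldown_s: int = 60     # too low -> flapping
        scale_down_cooldown_s: int = 300  # too low -> killed under bursts

    def desired_replicas(policy: ScalingPolicy, current: int, cpu: float) -> int:
        # Scale so the average utilization lands near the target.
        want = max(1, round(current * cpu / policy.target_cpu))
        return min(policy.max_replicas, max(policy.min_replicas, want))

    print(desired_replicas(ScalingPolicy(), current=4, cpu=0.90))  # -> 6

Every one of these values has to be fitted to the workload’s real traffic pattern, and a wrong cooldown or target can make autoscaling more expensive than a fixed fleet.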
Below are links to two articles discussing serverless and autoscaling architecture that might interest you:
https://www.infoworld.com/article/3706890/is-a-serverless-database-right-for-your-workload.html
https://www.infoworld.com/article/3705610/the-shortcomings-of-serverless-computing.html