Serverless computing is not just another buzzword: it is radically transforming the way we build applications and services. By drawing on cloud-based compute capacity, it simplifies deployment, since developers no longer have to think about server provisioning and maintenance. Serverless computing is also generally considered more cost-effective and easier to scale, and it boosts productivity by simplifying backend development.
As of today, AWS Lambda is one of the most widely used platforms for running code without provisioning servers. In essence, AWS Lambda is code-centered: as soon as an event triggers a function, Lambda automatically brings up the compute capacity needed to execute it. Under its pricing model you pay only for the compute time consumed while your code runs, with no charge for server maintenance. It sounds almost too good to be true.
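To make the pay-per-execution model concrete, the sketch below estimates a monthly Lambda bill from invocation count, memory size, and average duration. The per-GB-second and per-request rates are placeholders for illustration, not current AWS prices; always check the official pricing page.

```javascript
// Illustrative cost estimate: Lambda bills per request plus GB-seconds
// of compute (allocated memory × execution duration). Rates are assumptions.
function lambdaMonthlyCost(invocations, memoryMb, avgDurationMs, {
  pricePerGbSecond = 0.0000166667, // assumed rate, see the AWS pricing page
  pricePerRequest = 0.0000002,     // assumed rate
} = {}) {
  const gbSeconds = invocations * (memoryMb / 1024) * (avgDurationMs / 1000);
  return gbSeconds * pricePerGbSecond + invocations * pricePerRequest;
}

// e.g. one million invocations at 512 MB, 200 ms average duration
console.log(lambdaMonthlyCost(1_000_000, 512, 200).toFixed(2)); // → "1.87"
```

Note that there is no baseline cost for idle time: with zero invocations the estimate is zero, which is exactly the property that makes Lambda attractive for infrequently called services.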
Admittedly, though, because serverless services are still relatively young, using AWS Lambda (like other serverless frameworks) comes with a number of caveats, including performance and resource limits. When a function is invoked infrequently, or when a workload spike forces new instances to be initialized, serverless code can suffer from higher response latency (the so-called ‘cold start’) than code running on a dedicated on-prem server or virtual machine.
Below we will explore how the choice of programming technology impacts AWS Lambda latency.
AWS Lambda Latency: Java vs NodeJS
So, which technology helps achieve better performance and deliver faster time-to-value – NodeJS or Java?
When it comes to performance testing, nothing beats practical application. Our task was to implement an API that would run as a microservice on AWS Lambda. Before we got to work, we held conflicting opinions on which technology would be faster and more fail-safe for the task – NodeJS or Java. The advocates of NodeJS claimed that Java code supposedly suffered a 40-second ‘cold start’ whenever AWS had to spin up an additional instance under an intense workload.
We decided to conduct an experiment: we implemented one GET and one POST method for storing and retrieving JSON data in both Java and NodeJS, and deployed the two microservices on AWS Lambda. We chose DynamoDB as the data store and, for both cases, generated about 4,000 test records and loaded them into our test table.
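Our actual handlers are not reproduced here, but the sketch below shows the general shape of the NodeJS variant. The table name and field names are illustrative, and the DynamoDB client is injected (DocumentClient-style `get`/`put` calls) so the handlers can be exercised without AWS:

```javascript
// Sketch of the NodeJS Lambda handlers (illustrative, not our exact code).
// `db` is expected to expose DocumentClient-style get/put returning
// objects with a .promise() method, as in the AWS SDK v2 for JavaScript.
const TABLE = 'records'; // hypothetical table name

const makeHandlers = (db) => ({
  // GET: look up a stored JSON document by its id path parameter
  async getRecord(event) {
    const { Item } = await db
      .get({ TableName: TABLE, Key: { id: event.pathParameters.id } })
      .promise();
    return Item
      ? { statusCode: 200, body: JSON.stringify(Item) }
      : { statusCode: 404, body: '' };
  },
  // POST: store the JSON request body as-is
  async postRecord(event) {
    const item = JSON.parse(event.body);
    await db.put({ TableName: TABLE, Item: item }).promise();
    return { statusCode: 201, body: JSON.stringify({ id: item.id }) };
  },
});
```

Injecting the client rather than constructing it inside the handler also keeps the functions unit-testable with an in-memory stub, which is how we would sanity-check the logic before load testing.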
After that, we carried out load testing of both microservices using JMeter, covering the following three cases:
1. After a certain idle period, we sent 1, 10, 20, 50, and 100 requests at a time for POST and GET simultaneously.
2. Every second, we simultaneously sent 1, 10, 20, 50, and 100 requests at a time for POST and GET (1 GET and 1 POST in the first second, 10 GET and 10 POST in the next, and so on).
3. We repeated case (1) 10 minutes after finishing case (2).
You can see the results in the tables below:
For each of the cases, we have prepared the charts to visually compare the response time for the two technologies.
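The charts aggregate raw JMeter samples into one point per batch size. For readers who want to reproduce the aggregation, a helper along these lines (illustrative; the `batch`/`ms` sample shape is an assumption, not JMeter's native export format) computes the mean and worst-case response time per batch size:

```javascript
// Aggregate raw response-time samples (ms) into per-batch-size stats,
// roughly what the charts plot: one point per batch size (1, 10, 20, 50, 100).
function summarize(samples) {
  // samples: [{ batch: 10, ms: 123 }, ...] — assumed shape for illustration
  const byBatch = new Map();
  for (const { batch, ms } of samples) {
    if (!byBatch.has(batch)) byBatch.set(batch, []);
    byBatch.get(batch).push(ms);
  }
  return [...byBatch.entries()].map(([batch, times]) => ({
    batch,
    mean: times.reduce((a, b) => a + b, 0) / times.length,
    max: Math.max(...times), // worst case exposes cold-start outliers
  }));
}
```

Tracking the maximum alongside the mean matters here, because a cold start shows up as a single extreme outlier that an average alone would smooth away.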
Test case 1:
Test case 2:
Test case 3:
As you can see from the charts, NodeJS shows roughly the same processing time for GET and POST requests, while Java takes comparatively long to process GET but handles POST much faster. So what are the reasons behind these unusual results?
We assumed the delay might stem from the fact that the Java version uses an ORM to interact with DynamoDB, while NodeJS simply passes along the data it receives. We decided to test this hypothesis and rewrote the Java microservice without the ORM. After that, we repeated the tests and received the following results:
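One concrete cost an ORM layer adds is converting between plain JSON values and DynamoDB's typed AttributeValue wrappers for every field on every request; skipping the mapping lets a handler pass raw data through. The sketch below illustrates that per-field marshalling work. It is a deliberately simplified stand-in, not the AWS SDK's or any ORM's actual implementation:

```javascript
// Simplified marshalling from plain JSON values to DynamoDB AttributeValue
// wrappers ({S: ...}, {N: ...}, ...) — the kind of per-field work an
// ORM/mapper performs on every read and write. Illustrative only.
function toAttributeValue(value) {
  if (typeof value === 'string') return { S: value };
  if (typeof value === 'number') return { N: String(value) }; // numbers travel as strings
  if (typeof value === 'boolean') return { BOOL: value };
  if (value === null) return { NULL: true };
  if (Array.isArray(value)) return { L: value.map(toAttributeValue) };
  if (typeof value === 'object') {
    const m = {};
    for (const [k, v] of Object.entries(value)) m[k] = toAttributeValue(v);
    return { M: m }; // nested documents recurse field by field
  }
  throw new TypeError(`unsupported type: ${typeof value}`);
}
```

Because this recursion touches every field of every record, its cost scales with document size and request volume, which is consistent with the read-path slowdown we observed when the ORM was in place.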
Then we rebuilt the charts and compared Java without ORM against NodeJS.
Test case 1:
Test case 2:
Test case 3:
As the diagrams show, with Java without ORM we obtained almost identical results for GET and POST requests. The one thing worth noting is that with the ORM, saving a new record to DynamoDB was considerably faster.
Going further still, for each of the above solutions (Java with ORM, NodeJS, Java without ORM) we sent 500 GET and 500 POST requests simultaneously, 5 times in a row.
You can see the results in the tables below:
In this case, 3.8% of GET requests failed with 502 Bad Gateway.
In this case, 6.8% of GET and 10.3% of POST requests failed with 502 Bad Gateway.
In this case, 15.2% of POST requests failed with 502 Bad Gateway.
An illustrative comparison of response speed in the test case with 500 GET and 500 POST requests can be seen in the diagram below:
Summing up, we can conclude that the difference in speed between AWS Lambda microservices implemented in NodeJS and in Java is not that great, provided no ORM is used. As the study shows, to maximize performance it makes sense to implement POST with the ORM and GET without it.
As for the ‘cold start’ issue, it evidently exists for both NodeJS and Java under a very large number of simultaneous requests (1,000 in our tests). This is evidenced by the 502 Bad Gateway responses across all three implementations.
Given that Java is also a time-proven technology, we would recommend reliable Java over the comparatively young NodeJS.
Benchmark survey conducted by: Andrew Smiryakhin, Bohdan Boyko and Daniil Volkov.