
686K TPS with Spring Framework Web App and VoltDB

Tuesday, June 26, 2012 - 12:00am

Written by Andrew Wilson

 

We’ve recently put up a series of blog posts describing the components of a Spring-MVC web application, including VoltDB as the database, that saves votes being called in for talent show contestants. Today I’ll talk about what happened when we benchmarked the Voter application on Amazon’s cloud platform. The short story – running on a suitable EC2 configuration (see details below), we achieved 686,000 TPS for a Spring-enabled application using VoltDB.

The Benchmark Application

I’ll start by summarizing the aforementioned blog posts, but you are welcome to read them in full: Using the Spring @Schedule Annotation, Using the Spring Converter API with VoltDB Data Objects, and Building A High Throughput Web App with Spring-MVC and VoltDB.

 

The Voter application validates and stores phoned-in votes for talent show contestants. A single web page displays the results, refreshing every 400ms. A Spring @Scheduled task submits votes as rapidly as possible, and another task reports the VoltDB driver throughput every five seconds.
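For readers who want a feel for those two tasks without opening the earlier posts, here is a minimal sketch, assuming a VoltDB Client bean is configured elsewhere and that annotation-driven scheduling is enabled; the class name, burst size, and vote() parameter values are illustrative, not the exact code from the sample application.

import java.util.concurrent.ThreadLocalRandom;
import java.util.concurrent.atomic.AtomicLong;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;
import org.voltdb.client.Client;
import org.voltdb.client.ClientResponse;
import org.voltdb.client.ProcedureCallback;

// Illustrative sketch only: a vote-firing task plus a throughput reporter.
// Class name, burst size, and parameter values are assumptions, not the
// exact code from the sample application.
@Component
public class VoteTasks {

    private final Client voltClient;   // VoltDB client bean, configured elsewhere (assumption)
    private final AtomicLong acceptedVotes = new AtomicLong();

    @Autowired
    public VoteTasks(Client voltClient) {
        this.voltClient = voltClient;
    }

    // Fire a burst of asynchronous vote() calls on every scheduled tick.
    @Scheduled(fixedDelay = 1)
    public void castVotes() throws Exception {
        for (int i = 0; i < 1000; i++) {
            long phoneNumber = ThreadLocalRandom.current().nextLong(2000000000L, 9999999999L);
            int contestant = ThreadLocalRandom.current().nextInt(1, 7);
            voltClient.callProcedure(new ProcedureCallback() {
                @Override
                public void clientCallback(ClientResponse response) {
                    if (response.getStatus() == ClientResponse.SUCCESS) {
                        acceptedVotes.incrementAndGet();
                    }
                }
            }, "vote", phoneNumber, contestant, 2L);  // 2 = assumed max votes per phone number
        }
    }

    // Report throughput every five seconds, as the benchmark app does.
    @Scheduled(fixedRate = 5000)
    public void reportThroughput() {
        System.out.printf("~%d successful votes/sec%n", acceptedVotes.getAndSet(0) / 5);
    }
}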

 

The benchmark application can be downloaded here. It includes a detailed README that explains how to build and deploy the application.

 

The VoltDB server cluster is running the Voter example found in the <VoltDB_Home>/examples/voter directory. The Voter example database is shipped with all editions of VoltDB.

The Benchmark Results

Using eight Tomcat nodes connected to a 12-node VoltDB cluster, each client node executed an average of 85,870 TPS, for a total of 686,960 TPS.

The Environment

We previously posted 695k TPS with Node.js and VoltDB, which also ran on an Amazon EC2 cluster, so we decided to use EC2 once again. We created a 20-node cc2.8xlarge cluster split into Tomcat and VoltDB server nodes. The Tomcat nodes ran the benchmark web application.

 

The cc2.8xlarge instances provide the following, as described by the Amazon EC2 Instance Types page:

Cluster Compute Eight Extra Large Instance

60.5 GB of memory
88 EC2 Compute Units (2 x Intel Xeon E5-2670, eight-core “Sandy Bridge” architecture)
3370 GB of instance storage
64-bit platform
I/O Performance: Very High (10 Gigabit Ethernet)
API name: cc2.8xlarge

 

These nodes were configured with:

 

Ubuntu Server 12.04 LTS for Cluster Instances AMI
NFS
NTP
Oracle JDK 1.7
Apache Tomcat 7.0.28 (client servers only)
VoltDB Enterprise Edition 2.7.2

 

A single node was configured with a common NFS mount and also acted as the NTP server. This server will be referred to below as the master. All other nodes connected to the master for both the NFS mount and the NTP server.

 

All nodes were mapped by internal EC2 IP address to our hostname scheme, which was simply awbenchserver<N>, where N was a number from 1 through 19. These host names were added to a common hosts file that was copied to /etc/hosts on every node. The master was named awbenchmaster and also participated in running the benchmark.
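The shared hosts file therefore looked roughly like the fragment below; the internal IP addresses shown are placeholders, not the actual EC2 addresses we used.

# /etc/hosts fragment copied to every node (placeholder IPs)
10.0.0.10   awbenchmaster
10.0.0.11   awbenchserver1
10.0.0.12   awbenchserver2
...
10.0.0.29   awbenchserver19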

 

NTP is necessary to synchronize and order transactions within VoltDB, and the cluster performs best when offset and jitter are at their lowest. All nodes connected to awbenchmaster and then synchronized. The servers were allowed to sit idle for approximately 15 minutes until the jitter and offset were in the sub-millisecond range.

The VoltDB Cluster Servers

Twelve EC2 servers were used to run the VoltDB cluster. The VoltDB cluster ran from the NFS mount but stored all node-specific data in /tmp/voltdbroot.

 

Each node had five partitions (60 partitions across the cluster).
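In VoltDB, the partition count per node is controlled by the sitesperhost setting in the deployment file; a sketch of a deployment.xml matching this cluster might look like the following. The kfactor value is an assumption, since the post does not state the replication factor.

<?xml version="1.0"?>
<!-- Sketch of a deployment file for 12 hosts with 5 partitions (sites) each.
     kfactor="0" is an assumption; the post does not state the replication factor. -->
<deployment>
    <cluster hostcount="12" sitesperhost="5" kfactor="0" />
</deployment>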

 

The VoltDB leader node compiled the catalog and was started first. The remaining 11 nodes were then started and idled until the cluster initialized successfully.

The Spring Clients

Eight client nodes were configured to run Tomcat from /tmp. The SpringWebVoter-1.war file produced by the Maven build was copied to each server’s local Tomcat instance.

The Tomcat instances were brought up individually and confirmed to have successfully connected to the VoltDB servers. They were then left to run for 10 minutes. The log information was gathered before terminating the Tomcat instances and used to calculate the transaction rate per Tomcat server and across the entire Tomcat cluster. The Tomcat servers were terminated only after all statistics had been gathered; this avoided throughput spikes, since removing nodes mid-run would free up bandwidth for the remaining nodes and skew the results.

The Transactions

The Tomcat servers “firehosed” the VoltDB cluster by calling Voter’s vote() stored procedure continuously. The stored procedure performed four operations:

  1. Retrieve the caller’s location (select)
  2. Verify that the caller had not exceeded his/her vote maximum (select)
  3. Verify that the caller was voting for a valid contestant (select)
  4. If yes to all of the above, a vote was cast on behalf of that caller (insert)

Consequently, the 686,960 TPS translated into 2,747,840 SQL operations per second (three selects and one insert per transaction). Each insert also triggered updates to two different materialized views.
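For readers unfamiliar with VoltDB stored procedures, here is a condensed, illustrative sketch of what those four operations look like as a procedure; the table and column names approximate the shipped Voter example but should not be taken as the exact code.

import org.voltdb.SQLStmt;
import org.voltdb.VoltProcedure;
import org.voltdb.VoltTable;

// Condensed, illustrative sketch of the vote() procedure's four operations.
// Table and column names approximate the shipped Voter example and may not match exactly.
public class Vote extends VoltProcedure {

    final SQLStmt lookupState = new SQLStmt(
        "SELECT state FROM area_code_state WHERE area_code = ?;");
    final SQLStmt countVotes = new SQLStmt(
        "SELECT COUNT(*) FROM votes WHERE phone_number = ?;");
    final SQLStmt checkContestant = new SQLStmt(
        "SELECT contestant_number FROM contestants WHERE contestant_number = ?;");
    final SQLStmt insertVote = new SQLStmt(
        "INSERT INTO votes (phone_number, state, contestant_number) VALUES (?, ?, ?);");

    public long run(long phoneNumber, int contestantNumber, long maxVotesPerPhone) {
        // Three selects: caller's location, votes cast so far, contestant validity.
        voltQueueSQL(lookupState, (short) (phoneNumber / 10000000L));
        voltQueueSQL(countVotes, phoneNumber);
        voltQueueSQL(checkContestant, contestantNumber);
        VoltTable[] checks = voltExecuteSQL();

        String state = checks[0].getRowCount() > 0
                ? checks[0].fetchRow(0).getString(0) : "XX";
        long votesSoFar = checks[1].asScalarLong();
        boolean validContestant = checks[2].getRowCount() > 0;

        if (votesSoFar >= maxVotesPerPhone || !validContestant) {
            return 1; // vote rejected (illustrative return code)
        }

        // One insert: cast the vote, which also maintains two materialized views.
        voltQueueSQL(insertVote, phoneNumber, state, contestantNumber);
        voltExecuteSQL(true);
        return 0; // vote accepted (illustrative return code)
    }
}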

Observations & Notes

It was actually pretty exhilarating to bring up the EC2 cluster and watch the benchmark run. Running the benchmark yourself is relatively trivial, though I strongly recommend the cc2 cluster instances, since you get guaranteed network and hardware performance, aside from the EBS drive. The set-up was a bit time-consuming but very reliable across multiple runs.

 

Why use EC2 rather than a local cluster? A local cluster of twenty “bare metal” nodes would likely perform much better than EC2, but our throughput numbers would be nearly impossible to reproduce independently. That’s not a knock against EC2; it’s more about transparency. We wanted it to be possible for you to run the same configuration and roughly replicate our results.

 

Why did we use cc2.8xlarge rather than cc1.4xlarge? Each cc2 node comes with 60.5 GB of memory, and the test can insert up to 686K rows per second, or roughly 41.2M rows per minute. The larger machines were necessary to ensure there was enough memory to store all the data.

 

Could we have gotten a higher TPS rate? Yes, the transaction rate could have exceeded 686K TPS. With smaller Tomcat clusters connected to the same VoltDB cluster, individual client nodes pushed more than 130K TPS, though that is an anecdotal number, not a strictly measured one. Reaching or exceeding 1M TPS likely would have required adding four to six more VoltDB servers and three to four more Tomcat nodes. Our main goal was to observe the throughput of our Spring Framework integration harness at scale, so we pulled the plug once we achieved that.

Benchmark Webinar

I will be giving a webinar talk about our Spring Framework reference implementation, including a review of our related benchmark activities, on July 12, 2012 at 2pm EDT. Please check our webinars page for more information.