Benchmarking Crunchy Data for latency

Vishnu M

October 15, 2024

At Rails World 2024, DHH unveiled Kamal 2 in his opening keynote. Now folks want to give Kamal a try, but some of them are worried about their data. They want to take one step at a time, and they feel more comfortable if their database is managed by someone else.

That's where Crunchy Data comes in. They provide a managed Postgres service. Check out this tweet from DHH about Crunchy Data.

During an internal discussion, one of the BigBinary engineers brought up the issue of "latency". Since the PostgreSQL server would not be in the same data center, what would the latency be? How much impact would it have on performance?

We didn't know the answer, so we thought we would do some benchmarking.

Benchmarking

To do the comparison, we needed another hosting provider where we could run PostgreSQL in the same data center as the client. We chose to work with Digital Ocean.

To compare the two services, we wrote a benchmark script in Ruby similar to the one Ben Johnson wrote in Go for his GopherCon talk in 2021.

In this benchmark, we're using Ruby's Benchmark module to measure the performance of a series of database operations. Here's what the code does:

  1. It establishes a connection to the database only once, at the beginning of the script. This is done outside the benchmarked operations because establishing a connection can be slow, mainly due to TLS negotiation, and we don't want that time to skew our measurements (the short sketch after this list shows how to measure it on its own).

  2. It then performs the following operations 10,000 times, measuring each one:

    • Drops a table named 't' if it exists.
    • Creates a new table 't' with two columns: 'id' (an auto-incrementing primary key) and 'name' (a text field).
    • Inserts a single row into the table with the name 'jane'.
    • Selects the 'name' from the table where the 'id' is 1 (which should be 'jane').
  3. After all 10,000 iterations, it calculates and prints the average time for each operation in microseconds.
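
To get a feel for how expensive connection establishment is, here is a minimal sketch that times PG.connect on its own. This is our own illustrative snippet, not part of the benchmark; reading the connection string from a DATABASE_URL environment variable is just an assumption for the example.

require "pg"
require "benchmark"

# Assumption for this sketch: the connection string lives in DATABASE_URL.
connection_string = ENV.fetch("DATABASE_URL")

# Time the full connection handshake, including TLS negotiation.
setup_time = Benchmark.measure do
  conn = PG.connect(connection_string)
  conn.close
end.real

puts "connection setup: #{(setup_time * 1000).round(2)} milliseconds"

With connection setup kept out of the measurements, here is the full benchmark script: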

require "pg"
require "benchmark"

class PostgresBenchmark
  def initialize(connection_string)
    # Connect once, up front, so TLS negotiation is not measured.
    @conn = PG.connect(connection_string)
  end

  def run(iterations = 10_000)
    total_times = Hash.new { |h, k| h[k] = 0 }

    iterations.times do |i|
      puts "Running iteration #{i + 1}" if (i + 1) % 1000 == 0
      times = benchmark_operations
      times.each { |key, time| total_times[key] += time }
    end

    average_times = total_times.transform_values { |time| time / iterations }
    print_results(average_times, iterations)
  ensure
    # Always close the connection, even if an iteration raises.
    @conn.close if @conn
  end

  private

  def benchmark_operations
    times = {}

    # .real is wall-clock time in seconds.
    times[:drop] = Benchmark.measure { @conn.exec("DROP TABLE IF EXISTS t") }.real
    times[:create] = Benchmark.measure { @conn.exec("CREATE TABLE t (id SERIAL PRIMARY KEY, name TEXT)") }.real
    times[:insert] = Benchmark.measure { @conn.exec("INSERT INTO t (name) VALUES ('jane')") }.real
    times[:select] = Benchmark.measure do
      result = @conn.exec("SELECT name FROM t WHERE id = 1")
      raise "Unexpected result" unless result[0]["name"] == "jane"
    end.real

    times
  end

  def print_results(times, iterations)
    total_time = times.values.sum

    puts "\nAVERAGE ELAPSED TIME (over #{iterations} iterations)"
    puts "drop    #{(times[:drop] * 1_000_000).round(2)} microseconds"
    puts "create  #{(times[:create] * 1_000_000).round(2)} microseconds"
    puts "insert  #{(times[:insert] * 1_000_000).round(2)} microseconds"
    puts "select  #{(times[:select] * 1_000_000).round(2)} microseconds"
    puts "TOTAL   #{(total_time * 1_000_000).round(2)} microseconds"
  end
end

if __FILE__ == $0
  connection_string = "<DB_CONNECTION_STRING>"
  benchmark = PostgresBenchmark.new(connection_string)
  benchmark.run(10_000)
end

Database Specifications and Setup

Digital Ocean

  • Region: NYC3 datacenter
  • Specs: 2 vCPU, 4GB Memory
  • Price: $60 per month

Crunchy Data

  • Provider: AWS
  • Region: us-east-1
  • Specs: 2 vCPU, 4GB Memory (hobby-4 plan)
  • Price: $70 per month

We provisioned a Digital Ocean droplet in the NYC3 data center and invoked the benchmark script from that machine. The Digital Ocean database was in the same NYC3 data center. For Crunchy Data, we selected the us-east-1 region, as it was the closest to NYC3.

Benchmarking results

Digital Ocean

AVERAGE ELAPSED TIME (over 10000 iterations)
drop    3448.07 microseconds
create  5048.39 microseconds
insert  891.81 microseconds
select  584.17 microseconds
TOTAL   9972.44 microseconds

Crunchy Data

AVERAGE ELAPSED TIME (over 10000 iterations)
drop    10097.89 microseconds
create  16818.63 microseconds
insert  8416.35 microseconds
select  7211.42 microseconds
TOTAL   42544.29 microseconds

Benchmarking analysis

The results of this benchmark do not come as a surprise. Digital Ocean performs significantly better than Crunchy Data. This performance difference can be attributed primarily to network latency.

Network latency refers to the round-trip time (RTT) it takes for the data to travel from its source to its destination and back again across a network. In the context of database operations, it's the time taken for a query to be sent from the client to the database server and for the response to return to the client.
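
One way to approximate this round-trip time directly is to time a query that does essentially no work on the server, such as SELECT 1. Here is a minimal sketch along those lines; as before, reading the connection string from DATABASE_URL is just an assumption for the example.

require "pg"
require "benchmark"

conn = PG.connect(ENV.fetch("DATABASE_URL"))

# SELECT 1 does almost no work server-side, so the elapsed time
# is dominated by the network round trip.
rtt = Benchmark.measure { conn.exec("SELECT 1") }.real
puts "approximate RTT: #{(rtt * 1_000_000).round(2)} microseconds"

conn.close

In practice you would average this over many iterations, as the benchmark script does, since individual round trips can vary.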

In our benchmarking, the Digital Ocean database and the client machine invoking the script were both located in the same data center (NYC3), resulting in minimal network latency. On the other hand, the Crunchy Data database was hosted in AWS us-east-1, so every query had to travel a greater physical distance, adding to the latency.

To get a more accurate value for the network latency, we can compare the average time taken to run the SELECT operation. The SELECT operation in the script is a point query. A point query refers to a type of query that retrieves one or a few rows based on a unique key.

In our script, it retrieves a single name value from the table t where the id is 1 (the primary key), and is very fast to execute. Thus, the time taken to execute the SELECT operation gives us an approximate value for the network latency.

db_time = network_latency + query_execution_time

For point queries, the query_execution_time is almost zero, so nearly all the time taken is network latency.

db_time ≈ network_latency
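
You can sanity-check this assumption by asking Postgres itself how long the query takes to execute, independent of the network. EXPLAIN (ANALYZE) runs the query and reports server-side planning and execution times, which exclude the round trip back to the client. A quick sketch, assuming the table t from the benchmark script still exists and the connection string is in DATABASE_URL:

require "pg"

conn = PG.connect(ENV.fetch("DATABASE_URL"))

# EXPLAIN (ANALYZE) executes the query and prints server-side timings;
# for a primary key lookup these are typically tens of microseconds.
conn.exec("EXPLAIN (ANALYZE) SELECT name FROM t WHERE id = 1").each do |row|
  puts row["QUERY PLAN"]
end

conn.close

If the reported execution time is a few tens of microseconds while the client-side measurement is in the thousands, the gap is the network.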

Looking at the benchmark results, the "select" operation took 584 microseconds on Digital Ocean and 7211 microseconds on Crunchy Data.

The difference in network latency is 6627 microseconds, or about 6.6 milliseconds, per query.

This value can add up over multiple queries and have a significant impact on the overall response time of your application. To put this into perspective, for a web application that makes 10 sequential database queries to render a page, this could add about 66 milliseconds to the page load time. That could be acceptable if your page already loads in 3-4 seconds. However, if you are trying to load your page in 200 milliseconds, you need to watch out.
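
As a quick back-of-the-envelope check, here is that arithmetic expressed in Ruby, using the "select" figures from our benchmark:

# Per-query latency difference from the benchmark results above.
extra_latency_per_query_us = 7211 - 584 # => 6627 microseconds

# A page that runs 10 database queries one after another.
sequential_queries = 10

added_ms = (extra_latency_per_query_us * sequential_queries) / 1000.0
puts "added page load time: ~#{added_ms.round} milliseconds" # => ~66 milliseconds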

Ensuring that the database is always up and properly backed up is a non-trivial problem. Latency notwithstanding, Crunchy Data takes care of running the database. This gives us peace of mind and allows us to exit the cloud one step at a time.
