Original post: rocketlaunchr.cloud

I’m the creator of the dbq package which aims to eradicate boilerplate code in database-related operations.

Recently I decided to overhaul the entire library to drastically improve its performance.

This article will explore how it compares to its nearest competitors in the ecosystem (sqlx and GORM). dbq provides rich features such as automatic retry (for mission-critical queries) and transaction management (automatic rollback) which the others don’t, so the question of comparative performance had been lingering in the back of my mind.

This article will be a tutorial on how to benchmark. By the end of the article, you’ll find out if the payoff was worth the effort.

The article will use the Go language but the general concept is language-agnostic.

Benchmarking refers to measuring the performance of a particular algorithm or operation, typically by timing how long it takes to run.

There are two major use cases for it. The first is to measure how much better one implementation of an algorithm is compared to another.

The second is to compare the performance of your library or product relative to your competitors.

Obviously, your hardware has to be consistent for the results to be valid.

In Go, benchmarking is baked into the language so it’s a first-class citizen. In other languages, you will need to find third-party tools for assistance.

The first step is to plan how to test your library against your competitors. In my case, the most common use case for all three packages is probably to query a database table and then hydrate the results into an equivalent data structure (unmarshal into a struct, in Go lingo).

The setup

Almost all non-trivial benchmarks will require some sort of setup and cleanup. In this case, the setup will involve seeding a MySQL database with 10,000 fake entries into a temporary table so that each of the libraries can fetch the entries.

The cleanup

Cleanup refers to undoing whatever you did in the setup back to the original state so that if you were to run the benchmark again it will produce near-identical results. In this case, we’ll just delete the temporary table.

setup and cleanup code

In the Go code above, the defer statement (at the bottom) can be used to guarantee that the cleanup occurs after the benchmark is complete.
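Since the original snippet is shown as an image, here is a minimal, runnable sketch of the pattern. The database work is simulated with log messages so the example runs anywhere; the point is that defer guarantees the cleanup executes once the benchmark finishes, even if it panics or returns early.

```go
package main

import "fmt"

var events []string

// setup stands in for creating the temporary table and seeding it
// with 10,000 fake rows.
func setup() { events = append(events, "setup: create temp table and seed fake rows") }

// cleanup undoes the setup so a re-run produces near-identical results.
func cleanup() { events = append(events, "cleanup: drop temp table") }

func runBenchmarks() {
	setup()
	defer cleanup() // guaranteed to run when runBenchmarks returns, even on panic
	events = append(events, "benchmark: fetch rows with each library")
}

func main() {
	runBenchmarks()
	fmt.Println(events)
}
```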

I spent a lot of time and effort refactoring the library. I removed reflection and significantly reduced the number of memory allocations by reusing existing objects in memory. Since each allocation takes only a few nanoseconds, the real gains only become apparent when you query a large number of rows.
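To illustrate the kind of refactoring involved, here is a hedged sketch (not dbq’s actual code) comparing a function that allocates a fresh slice on every call with one that reuses a caller-supplied buffer. Using testing.Benchmark with b.ReportAllocs() makes the difference in allocations per operation visible.

```go
package main

import (
	"fmt"
	"testing"
)

// appendFresh allocates a new slice on every call.
func appendFresh(n int) []int {
	out := make([]int, 0, n)
	for i := 0; i < n; i++ {
		out = append(out, i)
	}
	return out
}

// appendReuse reuses a caller-supplied buffer, avoiding the allocation.
func appendReuse(buf []int, n int) []int {
	buf = buf[:0]
	for i := 0; i < n; i++ {
		buf = append(buf, i)
	}
	return buf
}

func main() {
	fresh := testing.Benchmark(func(b *testing.B) {
		b.ReportAllocs()
		for i := 0; i < b.N; i++ {
			_ = appendFresh(500)
		}
	})
	reuse := testing.Benchmark(func(b *testing.B) {
		b.ReportAllocs()
		buf := make([]int, 0, 500)
		for i := 0; i < b.N; i++ {
			buf = appendReuse(buf, 500)
		}
	})
	fmt.Println("fresh:", fresh.AllocsPerOp(), "allocs/op")
	fmt.Println("reuse:", reuse.AllocsPerOp(), "allocs/op")
}
```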

Each library will perform the same 4 tests sequentially. They will each fetch 5, then 50, then 500 and finally all 10,000 rows from the table.

The results will also be hydrated to the same data structure.

Since we want to be as fair as possible to each library, we also follow each one’s best practices. For dbq, a ScanFast method is expected on the struct. For GORM, the table name needs to be returned by the TableName method.
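As a sketch, a struct following both conventions might look like the following. The struct, field, and table names here are placeholders; the assumptions are that dbq expects a ScanFast() []interface{} method returning pointers to the fields in column order (per its README), and that GORM’s convention is a TableName() string method.

```go
package main

import "fmt"

// User is a hypothetical row struct; the fields and table name are placeholders.
type User struct {
	ID   int
	Name string
}

// ScanFast returns pointers to the fields in column order, so dbq can
// scan a row into the struct without reflection.
func (u *User) ScanFast() []interface{} {
	return []interface{}{&u.ID, &u.Name}
}

// TableName tells GORM which table this struct maps to.
func (User) TableName() string {
	return "users"
}

func main() {
	u := &User{}
	dest := u.ScanFast()
	*dest[0].(*int) = 1      // simulate the driver writing column values
	*dest[1].(*string) = "a" // into the scan destinations
	fmt.Println(u.ID, u.Name, u.TableName())
}
```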

The general structure of the benchmark will look like the following. You can see that we iterate over the number of rows we expect to fetch, and that each library’s operation is wrapped in a b.Run() function. The Go testing package facilitates other nifty approaches for benchmarking, but this approach allows Go to do multi-core benchmarks, amongst other cool features. We will be doing just a plain single-core benchmark in our case.

Structure of comparative benchmark

One major advantage of the b.Run() approach is that it allows us to do a “test-case” approach commonly used in unit tests. Since we have 4 different “sub-benchmarks”, we can use a for-loop and very elegantly perform a benchmark for each library. The resultant code above is clear and our intentions are easily understood.
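The table-driven structure described above can be sketched as a standalone program. This sketch uses testing.Benchmark so it runs outside go test, and a stand-in fetchRows function instead of real database queries; in the real benchmark, each library’s query sits inside its own b.Run() sub-benchmark.

```go
package main

import (
	"fmt"
	"testing"
)

type row struct {
	ID   int
	Name string
}

// fetchRows stands in for querying the table and hydrating structs.
func fetchRows(limit int) []row {
	out := make([]row, limit)
	for i := range out {
		out[i] = row{ID: i, Name: "fake"}
	}
	return out
}

func main() {
	// Iterate over the number of rows we expect to fetch.
	for _, limit := range []int{5, 50, 500, 10000} {
		res := testing.Benchmark(func(b *testing.B) {
			// The operation under test must sit inside the b.N loop.
			for i := 0; i < b.N; i++ {
				fetchRows(limit)
			}
		})
		fmt.Printf("limit=%d: %s\n", limit, res)
	}
}
```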

The most important requirement for benchmarking is the inner for-loop.

for i := 0; i < b.N; i++ {}

Go’s benchmarking framework runs the algorithm or operation numerous times (sometimes thousands of times) until it produces a stable run time. It enforces this by requiring that you place your algorithm or operation inside this inner for-loop, which executes b.N times.

dbq benchmark

This is how we’d query a table and immediately hydrate a struct using dbq:

You can see how economical it is to use dbq. There is no boilerplate code wasted.

sqlx benchmark

GORM benchmark

The GORM case was interesting. When I first ran the “equivalent” operation, its performance was beyond terrible. Something didn’t smell right. Originally the gorm.Open call was inside the inner for-loop. Then I remembered way back in 2016 (when I was a GORM user), I made a complaint that gorm.Open should not call Ping() when supplied with a *sql.DB. The complaint fell on deaf ears. In the interest of fairness, I moved gorm.Open outside the loop, even though doing so disadvantages sqlx and dbq in the comparison.

GOTCHAS

Since Go is a compiled language, sometimes the compiler can detect and remove code it believes is redundant without you even realising. For benchmarks, this is usually not desirable. You can find more information in this article by Dave Cheney.
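A common defence, described in Dave Cheney’s article, is to store the result of each iteration in a package-level variable, so the compiler cannot prove the work is unused and eliminate it. A minimal sketch (the sum function here is just a placeholder workload):

```go
package main

import (
	"fmt"
	"testing"
)

// sum is a placeholder workload whose result we must keep alive.
func sum(n int) int {
	total := 0
	for i := 0; i < n; i++ {
		total += i
	}
	return total
}

// A package-level sink prevents the compiler from removing the
// benchmarked call as dead code.
var sink int

func main() {
	res := testing.Benchmark(func(b *testing.B) {
		for i := 0; i < b.N; i++ {
			sink = sum(1000)
		}
	})
	fmt.Println(res, "(sink =", sink, ")")
}
```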

We first save the benchmark in a file ending in _test.go. You can find the full benchmark here.

You can run it like so (no parallel):

go test -run=XXX -bench=.

The -bench flag is required to demand that benchmarks are run (and not just unit tests). -run=XXX ensures that no unit tests are run (since presumably no test’s name matches XXX).

My hardware is:

My hardware specs
Benchmark results

The table shows each library’s performance for each test. The top section is when 5 rows are fetched, followed by the sections for 50 and 500 rows. The bottom section is 10,000 rows being fetched.

The final column is the most important. It tells you how many nanoseconds each operation took. It can be seen that dbq and sqlx are more or less the same (dbq edges out sqlx by a negligible amount), but as the number of rows fetched increases, GORM’s performance quickly degrades due to its extensive use of reflection.

For most intents and purposes, my recommendation is that you use dbq.

Features