Developing high-quality software inevitably requires some testing data.
You could be:
- Integration testing your application for correctness and regressions
- Testing the bounds of your application in your QA process
- Testing the performance of queries as the size of your dataset increases
Whatever the case, testing data is an integral part of the software development lifecycle and of the developer workflow. In this article, we'll explore three different methods for generating test data for a Postgres database.
In this example we'll be using Docker to host our Postgres database.
To get started, install Docker and start a container running Postgres:
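A minimal way to do this (the container name, credentials, and database name here are our own choices, not anything mandated by the image):

```bash
# Start a throwaway Postgres instance with deliberately weak credentials
docker run --name postgres-test \
    -e POSTGRES_USER=postgres \
    -e POSTGRES_PASSWORD=postgres \
    -e POSTGRES_DB=dev \
    -p 5432:5432 \
    -d postgres
```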
As you can see, we've set very insecure default credentials. This is not meant to be a robust, productionised instance, but it'll do for our testing harness.
In this example we'll set up a very simple schema. We're creating a basic app where we have a bunch of companies, and those companies have contacts.
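A sketch of such a schema — the exact table and column names here are illustrative choices, not prescribed:

```sql
-- Companies with a unique, auto-incrementing primary key
CREATE TABLE companies (
    id           SERIAL PRIMARY KEY,
    company_name VARCHAR(255) NOT NULL
);

-- Contacts belong to a company via a foreign key constraint
CREATE TABLE contacts (
    id           SERIAL PRIMARY KEY,
    company_id   INTEGER NOT NULL REFERENCES companies (id),
    contact_name VARCHAR(255),
    phone        VARCHAR(20),
    email        VARCHAR(255)
);
```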
This schema captures some of the business logic of our app. We have unique primary keys, we have foreign key constraints, and we have some domain-specific data types which carry 'semantic meaning'. For example, the random string `_SX Æ A-ii` is not a valid phone number.
Let's get started.
The first approach, which works well when you're starting your project, is to manually insert all the data you need. This just means writing a SQL script with a bunch of `INSERT` statements. The only real thing to think about is the insertion order, so that you don't violate foreign key constraints.
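A sketch of such a script against the companies/contacts schema described above (the values are made up; companies go first so the foreign key on contacts resolves):

```sql
-- Parent rows first: companies get ids 1 and 2 from the sequence
INSERT INTO companies (company_name)
VALUES ('Acme Widgets'), ('Globex Ltd');

-- Child rows second, referencing the company ids inserted above
INSERT INTO contacts (company_id, contact_name, phone, email)
VALUES
    (1, 'Jane Smith', '+44 117 496 0000', 'jane.smith@example.com'),
    (2, 'John Doe',   '+44 117 496 0001', 'john.doe@example.com');
```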
So here we're inserting directly into our database. This method is straightforward, but it does not scale when you need more data or when the complexity of your schema increases. Also, testing for edge cases requires hard-coding those edge cases into the inserted data - resulting in a linear amount of work for the bugs you want to catch.
Since you're a programmer, you don't like manual work. You like things to be seamless and, most importantly, automated!
Postgres comes with a handy function called `generate_series` which, ...drum roll... generates series! We can use it to generate as much data as we want without writing it by hand. Let's use `generate_series` to create 100 companies and 100 contacts:
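A sketch of what this looks like against the schema above - the string concatenation is deliberately naive, which is exactly the weakness discussed below:

```sql
-- Generate 100 companies with mechanical, placeholder-looking names
INSERT INTO companies (company_name)
SELECT 'company_name_' || i
FROM generate_series(1, 100) AS i;

-- Generate 100 contacts, pairing contact i with company i
INSERT INTO contacts (company_id, contact_name, phone, email)
SELECT i,
       'contact_name_' || i,
       'phone_' || i,
       'email_' || i
FROM generate_series(1, 100) AS i;
```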
We generated 100 companies and 100 contacts here; the types are correct, but the output is underwhelming. First of all, every company has exactly one contact, and more importantly the actual data looks completely useless.
If you care about your data being semantically correct (i.e. the text in your `phone` column actually being a phone number), you need to get more sophisticated. We could define our own functions to generate names, phone numbers, emails and so on - but why reinvent the wheel?
Synth is an open-source project designed to solve the problem of creating realistic testing data. It integrates with Postgres, so you won't need to write any SQL.
Synth uses declarative configuration files (just JSON, don't worry) to define how data should be generated. To install the `synth` binary, refer to the installation page.
The first step in using Synth is to create a workspace. A workspace is just a directory in your filesystem that tells Synth this is where you are going to be storing configuration:
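Something like the following, assuming the `synth` binary is on your `PATH` (the directory name is our choice; check `synth --help` if the subcommand differs in your version):

```bash
# Create a directory and mark it as a Synth workspace
mkdir synth_workspace && cd synth_workspace
synth init
```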
Next we want to create a namespace (basically a stand-alone data model) for this schema. We do this by simply creating a subdirectory, which Synth will treat as a separate schema:
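For this post we'll call the namespace `my_app` (again, an arbitrary name of our choosing):

```bash
# Each subdirectory of the workspace is a separate namespace
mkdir my_app
```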
Now comes the fun part! Using Synth's configuration language, we can specify how our data is generated. Let's start with the smaller table, `companies`. To tell Synth that `companies` is a table (or collection, in Synth lingo), we'll create a new file, `my_app/companies.json`:
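A sketch of that file - the generator names (`id`, `faker`, `company_name`) come from Synth's schema documentation and may differ between versions, so treat this as illustrative:

```json
{
    "type": "array",
    "length": {
        "type": "number",
        "constant": 1
    },
    "content": {
        "type": "object",
        "id": {
            "type": "number",
            "id": {}
        },
        "company_name": {
            "type": "string",
            "faker": {
                "generator": "company_name"
            }
        }
    }
}
```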
Here we're telling Synth that we have 2 columns, `id` and `company_name`. The first is a `number`, the second is a `string`, and the contents of the JSON object define the constraints on the data.
Let's sample some data using this data model to check it looks right:
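A sketch of the sampling command - the flags may differ between Synth versions, so check `synth generate --help` (the output is random, so we won't reproduce it here):

```bash
# Sample a handful of companies as JSON on stdout
synth generate my_app/ --collection companies --size 5
```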
Now we can do the same thing for the `contacts` table by creating a file `my_app/contacts.json`. Here we have the added complexity of a foreign key constraint pointing at the companies table, but we can solve it easily using Synth's `same_as` generator:
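A sketch of that file - again, the exact generator names (`same_as`, `name`, `phone_number`, `safe_email`) and the `locales` parameter are from Synth's faker integration and may vary by version:

```json
{
    "type": "array",
    "length": {
        "type": "number",
        "range": {
            "low": 1,
            "high": 4,
            "step": 1
        }
    },
    "content": {
        "type": "object",
        "id": {
            "type": "number",
            "id": {}
        },
        "company_id": {
            "type": "same_as",
            "ref": "companies.content.id"
        },
        "contact_name": {
            "type": "string",
            "faker": {
                "generator": "name"
            }
        },
        "phone": {
            "type": "string",
            "faker": {
                "generator": "phone_number",
                "locales": ["en_GB"]
            }
        },
        "email": {
            "type": "string",
            "faker": {
                "generator": "safe_email"
            }
        }
    }
}
```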
There is quite a bit going on here - to get an in-depth understanding of the Synth configuration, I'd recommend reading the comprehensive docs. There are tons of cool features which this schema can't really explore!
Now that we have both our tables' data models under Synth, we can generate data into Postgres:
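Something along these lines, with the connection string adjusted to match your own instance's credentials:

```bash
# Generate data for the whole namespace and write it straight to Postgres
synth generate my_app/ --to postgres://postgres:postgres@localhost:5432/dev
```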
Taking a look at the contacts table:
| id | company_id | contact_name | phone | email |
|----|------------|--------------|-------|-------|
| 1 | 1 | Carrie Walsh | +44(0)117 496 | email@example.com |
| 2 | 2 | Brittany Flores | +441632 960 | firstname.lastname@example.org |
| 4 | 4 | Amanda Marks | (0808) | email@example.com |
| 5 | 5 | Kimberly Delacruz MD | +44(0)114 | firstname.lastname@example.org |
| 6 | 6 | Jordan Williamson | (0121) | email@example.com |
| 7 | 7 | Nicholas Williams | (0131) 496 | firstname.lastname@example.org |
Much better :)
We explored three different ways to generate test data:
- Manual insertion: fine to get you started. If your needs are basic, it's the path of least effort to a working dataset.
- Postgres `generate_series`: scales better than manual insertion - but if you care about the contents of your data and have foreign key constraints, you'll need to write quite a bit of bespoke SQL by hand.
- Synth: has a small learning curve, but it removes most of the manual labour when you need realistic testing data at scale.
In the next post we'll explore how to subset your existing database for testing purposes. And don't worry if you have sensitive / personal data - we'll cover that too.