Test data management

There is a significant data management aspect to testing: test data management. Let’s start with a quick introduction and a glimpse of the project management reality of testing – then look at test data management in more detail.

Software testing

How long do you have to test a Dacia to turn it into a Mercedes?

I heard the above quote at university from one of our professors. Although the answer is obvious, a surprisingly large number of software projects still somehow believe that poor design can be corrected during the testing phase.

In reality, the amount of code going into production without proper testing is surprisingly large – as the well-known meme puts it:

 

[Image: “Testing is for wimps – real men test in production”]

Excuses

Sometimes we hear excuses for why testing is not done properly. Here are some of them:

  • I have already heard a vendor claim that code-level unit tests were enough and no further testing should be done. That is simply not true.
  • If you are in the custom software development business OR you customise a piece of existing software, you must keep track of your business requirements and maintain their changes during the project until you hand over the software. How you do this depends a great deal on your methodology, but there is no way around it. The granularity of the requirements should reach a level at which they are testable: that is, you can objectively decide whether a requirement has been fulfilled or not. Please do not think this is expensive “gold plating”. If you do not have control over your requirements, you will end up in a trial-and-error loop, just like the one we described in this blog entry.
  • One of the other top-five excuses is that software supporting the organisation in testing is expensive. If you hear this excuse, just check out some open-source testing tools like TestLink & co. (likewise, for bug tracking you could use Mantis).
  • If you hear someone saying that the amount of labour needed to set up these tools is huge and that you should use Excel to manage the test cases and their execution, please ask this person to measure the time needed to manage the effort in the absence of a centralised tool.
  • If you hear that there are no resources to do regression testing (testing whether a new function destroys previously working functionality), you might think about using robotics to automate at least part of the testing.

Test data management

Now imagine for a moment that a software project has put a proper amount of resources into requirement assessment and there are proper test cases to execute.

What is often still missing is proper test data and an agreed way to manage it, e.g.:

  • Ensure you can put the system into an initial “ready for testing” state before a test run
  • Ensure your test data supports retesting in case of bugfixes
  • Ensure you have proper test data for multiple test runs (sequential/parallel)
  • Know which data cannot be reused (e.g. certain identifiers) and how to generate new data in a systematic way
  • Ensure that the above is conducted as a routine task – ideally in an automated manner (a minimal sketch follows this list)
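
To make the last point concrete, here is a minimal sketch of what an automated routine could look like. It assumes a PostgreSQL-based system under test, a snapshot created earlier with pg_dump, and pytest as the test runner; all names and paths are invented for illustration, not a prescription.

```python
# A minimal sketch, assuming a PostgreSQL test database and a snapshot
# created earlier with pg_dump; all names and paths are made up.
import subprocess
import uuid

import pytest

TEST_DB = "systest"                          # hypothetical test database
SNAPSHOT = "/backups/systest_initial.dump"   # hypothetical "ready for testing" snapshot


@pytest.fixture(scope="session", autouse=True)
def restore_initial_state():
    """Put the system into the agreed initial state once per test run."""
    subprocess.run(["dropdb", "--if-exists", TEST_DB], check=True)
    subprocess.run(["createdb", TEST_DB], check=True)
    subprocess.run(["pg_restore", "--dbname", TEST_DB, SNAPSHOT], check=True)


@pytest.fixture
def fresh_order_id():
    """Identifiers that must not be reused are generated per test, never hard-coded."""
    return f"ORD-{uuid.uuid4().hex[:12].upper()}"


def test_order_entry(fresh_order_id):
    # The actual test would call the system under test with the generated data.
    assert fresh_order_id.startswith("ORD-")
```

The same principle applies whether the reset is a database restore, a virtual machine snapshot rollback, or a cloud template redeployment – what matters is that it runs unattended before every test round.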

Please note that if you run end-user trainings on a training system, you have the same challenges to solve.

Here are some recommendations on how to put together a proper test data set:

  • Tests should be reproducible. Optimally, you should be able to restore the initial state of the system (before the testing takes place) easily, e.g. by:
    • Having a virtual image of the systems in place that you restore + patch + upgrade and store before each test round. You can think about putting such a solution on AWS or Azure (or the like) – if your architecture is modern enough.
    • Backing up the database and restoring it (usually not easily possible with multiple systems integrated)
    • Generating test data with robots or even manually before the test run and assigning the test data to the proper test cases
  • Systems are integrated. This means that during testing you have to consider some limitations.
    • Sometimes it is possible to have a fully separated test environment with all the integrated systems. If you are in this lucky situation, you can usually treat it as a “single system”.
    • If this is not possible (the usual case), you should think about system-level data consistency rules. (Well, well – if you have a proper model of your data, that helps.)
  • Automate-automate-automate
    • If automating test data generation takes 30% more initial effort, just invest. The more often you test, the bigger the return on your original effort.
  • Privacy
    • There are cases where test data must be very close to production data. Should this be the case, consider privacy rules.
    • Whenever possible, please make your lives easier and use non-productive data for testing. Some database providers offer cloning features with data masking/scrambling (a simple masking sketch follows this list).
    • Mostly in data & analytics projects you have algorithms (e.g. grouping, classification etc.) that must be trained with productive data. Please note that this is not testing, and a separate, restricted environment may be needed. Such training runs produce a model that is usually small in size. This model usually does not contain any sensitive information by itself and can hence be transported into other systems – including the test system. Please note, however, that a model that works well with productive data can be useless when it meets artificial test data.
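
If your database does not offer built-in masking, even a simple, explicit scrambling step is better than copying production data as-is. The sketch below is only an illustration with invented column names: it keeps a deterministic pseudonym for the customer so that references between copied tables still match, and scrambles the rest.

```python
# A minimal masking sketch - not a complete anonymisation solution.
# Column names are invented; real projects must also consider
# re-identification risk across all integrated systems.
import hashlib
import random


def mask_row(row: dict) -> dict:
    masked = dict(row)
    # Deterministic pseudonym: the same customer always maps to the same token,
    # so referential integrity between copied tables is preserved.
    digest = hashlib.sha256(row["customer_name"].encode("utf-8")).hexdigest()
    masked["customer_name"] = f"CUST_{digest[:10]}"
    # Free-text contact data is scrambled completely.
    masked["email"] = f"user{random.randint(100000, 999999)}@example.test"
    # Non-identifying, analytically relevant fields stay untouched.
    return masked


rows = [{"customer_name": "Jane Doe", "email": "jane@corp.example", "country": "DE"}]
print([mask_row(r) for r in rows])
```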

Quality

A typical (suboptimal) timeline in many projects dealing with data integration (warning: provocative):

  1. What is the data we need? – Do workshops
  2. Where is the data? – Look at the documentation/Glossary — if there is one
  3. Challenges like: Oh, wow, we meant some other data… Oh, wow, why are these fields empty?
  4. All set, we are writing ETL/WS/… code to integrate the data
  5. No data could be loaded/integrated, lots of errors
  6. Repeat 3-4-5 in a trial-and-error loop as data quality is not good enough
  7. NO ERRORS FROM THE LOAD PROCESS 🙂 We are ready! (Fanfares)
  8. Oh, no, business says data does not make sense – data quality is not good enough
  9. Work an additional 2 months (the timeline can be anything from 5 days to 12 months), repeating steps 1-2-3-4-5-6-7 and 8 until data quality is good enough
  10. OK, now most of the data makes sense (Fanfares again)

Of course the above is exaggerated, but everyone who has been involved in data integration knows that it is not that far from reality.

There are a lot of things you can do to make this loop much shorter (more on that in a later post), and you can also take a historical view of how similar situations arise.

In this post I will point out just two aspects:

  • Do you see how many times bad data quality is mentioned? But what exactly is bad data quality? Can we measure data quality in an objective manner?
  • Sadly, point #7 is where some projects far too often think that the integration work is finished. In fact, by that time you are not even halfway through. Why is this?

Before answering, I need to cover a bit of theory. Relational database theory, together with its common implementations, teaches us how to put together a database structure that guarantees certain aspects of data quality. The possibilities to ensure good data are (without mathematical precision):

  1. You can define data types (text, number etc.) – but of course you can just store all numbers as text…
  2. You can set up keys (i.e. unique, non-empty values that identify e.g. a car, a person etc.) – but most database systems allow you not to
  3. You can set up referential integrity constraints (e.g. there is no credit card without a cardholder) – but you are allowed not to
  4. You can define domains, e.g. human height is somewhere between 0 and 270 cm – but you are allowed not to
  5. You can define patterns your data must follow, e.g. whether some text is a valid phone number – but you are allowed not to
  6. You can even define more complex rules/programs that allow you to check every aspect of the data you can possibly think of – but you are allowed not to

The reality is that these are possibilities to enhance the quality of your data, which a project may or may not implement. The more of these you are willing to implement, the better you have to know your business rules. And the more of these you implement, the slower your system will be (although the last time that was a valid excuse was in the early 2000s).
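
To illustrate what points 1–5 look like when a project actually declares them, here is a sketch on a made-up cardholder/credit-card schema, using SQLAlchemy only as one convenient way to write them down (the regex check assumes a PostgreSQL backend). Point 6 would typically end up as triggers, stored procedures or application-level checks.

```python
# Sketch of possibilities 1-5 declared on an invented schema.
from sqlalchemy import (CheckConstraint, Column, ForeignKey, Integer,
                        MetaData, Numeric, String, Table)

metadata = MetaData()

cardholder = Table(
    "cardholder", metadata,
    Column("cardholder_id", Integer, primary_key=True),    # 2: key (unique, non-empty)
    Column("name", String(100), nullable=False),           # 1: data type
    Column("height_cm", Numeric(4, 1)),
    CheckConstraint("height_cm BETWEEN 0 AND 270",          # 4: domain
                    name="ck_height_range"),
)

credit_card = Table(
    "credit_card", metadata,
    Column("card_number", String(19), primary_key=True),
    Column("cardholder_id", Integer,
           ForeignKey("cardholder.cardholder_id"),          # 3: referential integrity
           nullable=False),
    Column("phone", String(20)),
    CheckConstraint(r"phone ~ '^\+?[0-9 ]{7,20}$'",         # 5: pattern (PostgreSQL regex syntax)
                    name="ck_phone_format"),
)
```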

 

So back to the above questions:

  • Data quality can ultimately be measured through validation against business rules: the more the data complies with the business rules, the better its quality. Sometimes not all business rules are known in advance. Sometimes they change and the data management is not updated. Sometimes the business rules are so complicated that coding them is not worth the effort. Sometimes there is not enough time or money to implement all the rules. In short: data quality is usually not fully known before the data integration begins. (A small measurement sketch follows this list.)
  • When the data integration stream reaches point #7, only the subset of business rules that has somehow been implemented is validated. Usually this is only a fraction of the existing business rules. Points #8 and #9 are nothing else but figuring out the business rules that are not stored in any systematic way and trying to clean up the data using this new knowledge.
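
As a small illustration of the first point, measuring data quality can be as unglamorous as counting how many records pass each known business rule. The rules and field names below are invented for the sketch.

```python
# Sketch: data quality as the share of records passing explicit business rules.
import re

RULES = {
    "has_customer_id": lambda r: bool(r.get("customer_id")),
    "height_in_range": lambda r: (r.get("height_cm") is None
                                  or 0 <= r["height_cm"] <= 270),
    "valid_phone": lambda r: (r.get("phone") is None
                              or re.fullmatch(r"\+?[0-9 ]{7,20}", r["phone"]) is not None),
}


def quality_report(records):
    """Return the pass rate per rule - an objective, repeatable measurement."""
    total = len(records)
    return {name: sum(rule(r) for r in records) / total
            for name, rule in RULES.items()}


sample = [
    {"customer_id": "C1", "height_cm": 182, "phone": "+49 30 1234567"},
    {"customer_id": "", "height_cm": 431, "phone": "n/a"},
]
print(quality_report(sample))
```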

Are there ways to do this better? Definitely: with a data strategy you can do a lot to avoid such trouble. I’ll share some best practices in a future post.

Have you had a different experience? Do you have a different view?