Whodunit? Investigating data quality crimes


Picture the scene:

It’s shortly after midnight, and the street is quiet. The rain has stopped, but everything still glistens with moisture, and the moonlight lends the wet street a silver patina. With the exception of a stray cat foraging hopefully near a drain and the dripping leaves of the trees in the park, everything is still.
A sudden, shrill scream shatters the silence, sending the cat running for shelter, and causing a few heads to appear at newly-lit windows.

Twenty minutes later, a festoon of yellow “crime scene” tape decorates the trees, delimiting a small area within. The street’s silver patina has been replaced by alternating flashes of red and blue, and the street is market-day busy, crowded with uniformed police officers and curious onlookers straining for a better view of the shrouded body that lies just off-centre inside the cordon.

Now imagine you are one of the detectives attending the scene. You have only just arrived, and one of the other officers is escorting you in, passing on the known information as you walk. You soon reach the covered mass inside the cordon, whereupon you bend down and lift the edge of the sheet to get a better look at the victim. With a gasp of horror, you realise you recognise this body! It is your data, and a crime of significant magnitude has obviously been committed against it.

This little analogy (gratuitously and unashamedly embellished) was shared by the ebullient Danette McGilvray at a DAMA SA event last week, which I was privileged to attend.

She went on to make the point that root cause analysis of a data quality issue is much the same as a crime scene investigation; fittingly so, because that is precisely what a data quality issue is: a crime against your data! So let’s examine things a little more closely, but before we do, take note of the following pointers:

  • Maintain your focus on the particular problem you are trying to solve – you cannot solve all data crimes at once, so apply yourself to understanding this one intimately, and resolve it before moving on to the next.
  • Understand your environment: your systems, processes, data, business users, and the requirements pertinent to each, or you will end up solving the wrong crime, for the wrong audience.
With these in mind, let’s move on to solving the crime at hand!

Step one: Cordon off the crime scene
First off, we need to prevent valuable clues from being missed or obscured by people traipsing through the crime scene and tracking mud behind them. Unfortunately, we can’t keep things cordoned off forever, so after an initial investigation, photographs of the scene usually become the primary source of information. So after you’ve had a look at your victim, take some snapshots of the affected data, and move these elsewhere to work on later, thereby allowing business to continue unimpeded. Obviously, more crimes will be committed while we work, so we need to be quick about it!
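To make the snapshot step concrete, here is a minimal Python sketch using pandas; the table, the columns, and the postal-code rule are all invented for illustration:

```python
import pandas as pd
from datetime import datetime, timezone

def snapshot_suspect_rows(df: pd.DataFrame, mask: pd.Series, label: str) -> str:
    """Copy the affected rows to a timestamped file, so the investigation
    can continue offline while the live data keeps moving."""
    stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
    path = f"snapshot_{label}_{stamp}.csv"
    df[mask].to_csv(path, index=False)
    return path

# Hypothetical victims: customers whose postal codes fail a basic sanity check.
customers = pd.DataFrame({
    "customer_id": [1, 2, 3],
    "postal_code": ["8001", "", "99999999"],
})
suspect = ~customers["postal_code"].str.fullmatch(r"\d{4}")
print(snapshot_suspect_rows(customers, suspect, "postal_codes"))
```

The point of writing the snapshot out, rather than querying the live table over and over, is exactly the one the analogy makes: the photographs preserve the scene while business traffic keeps trampling through it.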

Step two: Examine the environment
The environment around the crime may provide us with further clues, so perhaps we should take a closer look. Is there anything in that environment that looks suspicious, anything that may have led to the crime, been involved in it, or that points to its origin? Look for people or processes that typically interact with or change the data: these are prime suspects in data quality crimes! Where was our victim going? This may well be the reason for changes to the data: to make it more acceptable when it reached its destination, whoever made them not realising that there were other, later destinations where these changes would cause our data to stick out like a sore thumb, attracting the attention of other data quality criminals. Perhaps the crime occurred earlier in the chain of events than we thought; where was our victim coming from? Did the crime perhaps originate there? We need to take a good look at our data’s movements, to determine exactly where these unwanted changes occurred. Perhaps one of our snapshots should cover the point in the chain immediately before the crime was detected.
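One practical way to trace the victim’s movements is to diff snapshots taken at successive points in the pipeline. A sketch, assuming we hold keyed snapshots from two stages (the stage data here is made up):

```python
import pandas as pd

def diff_stages(upstream: pd.DataFrame, downstream: pd.DataFrame,
                key: str, column: str) -> pd.DataFrame:
    """Return rows where a column changed between two pipeline stages;
    if anything comes back, the crime happened somewhere between them."""
    merged = upstream.merge(downstream, on=key, suffixes=("_up", "_down"))
    changed = merged[merged[f"{column}_up"] != merged[f"{column}_down"]]
    return changed[[key, f"{column}_up", f"{column}_down"]]

# Hypothetical snapshots from before and after an overnight cleansing job.
before = pd.DataFrame({"id": [1, 2, 3], "name": ["ACME Ltd", "Bob", "Cara"]})
after = pd.DataFrame({"id": [1, 2, 3], "name": ["ACME LIMITED", "Bob", "Cara"]})
print(diff_stages(before, after, "id", "name"))
```

Repeating this between each pair of adjacent stages narrows the search down to the one hop where the data actually changed, which is where our suspects congregate.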

Step three: Consider the timing
Some crimes are more likely to occur at certain times of day than others. Was this the case here? Is there anything unusual about the time the crime was committed? Could the same crime have been committed as easily at a different time of day? Again, we need to have a good look at our victim, and try to determine exactly when this dreadful data quality crime occurred. There may well be a relationship between the time and the environment – is it possible that these affected the likelihood of the crime occurring when it did? And there is almost certainly a link between our suspects and the timing, so examine the order of events: if this process had run before that one, would the result have been the same? Did one of our suspects perhaps delay the progression of our data victim long enough to allow the crime to be committed later? Why was our data involved with that suspect in the first place, is what I’d like to know!
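A simple way to interrogate the timing is to bucket the victims by hour of load and look for a spike; a sketch with an invented audit extract:

```python
import pandas as pd

# Hypothetical audit trail: when each record was loaded, and whether it
# later turned out to be a victim of the quality crime.
audit = pd.DataFrame({
    "loaded_at": pd.to_datetime([
        "2020-03-02 01:10", "2020-03-02 01:40", "2020-03-02 09:15",
        "2020-03-02 14:30", "2020-03-03 01:05", "2020-03-03 10:00",
    ]),
    "is_victim": [True, True, False, False, True, False],
})

# Victim rate per hour of day: a spike in one hour points at whatever
# runs then, e.g. a batch job scheduled ahead of our data in the queue.
print(audit.groupby(audit["loaded_at"].dt.hour)["is_victim"].mean())
```

In this made-up example every record loaded around 01:00 is a victim, which would send us straight to the schedule to see which suspect process runs in that window.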

Step four: Interview the witnesses and “persons of interest”
Who first noticed that a data quality crime had been committed, and what was it about the victim that alerted them? Perhaps it was a business user who realised that the data that arrived was not what was expected, or that it was the expected data but didn’t meet certain fundamental requirements. What else did our witnesses see, and what else do they know that could help us? Since many crimes are committed by those close to the victim, we should have a good look at our data’s close acquaintances – the people who interacted with it every day. What can they tell us about how the victim usually appeared or behaved? Which of these patterns was broken, resulting in this heinous crime against our data’s quality? Or did the victim appear normal and exactly as they would expect, indicating that the perpetrator may be found further down the line?
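Asking the data’s close acquaintances what “normal” looks like translates neatly into a small profile of expected patterns, compared against what actually arrived; a minimal sketch, with the data and the expectations invented for illustration:

```python
import pandas as pd

def profile(df: pd.DataFrame) -> dict:
    """Measure what the witnesses say 'normal' looks like."""
    return {
        "rows": len(df),
        "null_email_rate": round(df["email"].isna().mean(), 2),
        "distinct_countries": df["country"].nunique(),
    }

# What actually arrived (hypothetical data).
arrived = pd.DataFrame({
    "email": ["a@x.com", None, "c@x.com", None, None],
    "country": ["ZA", "ZA", "ZA", "UK", "ZA"],
})

# What the business users told us to expect.
expected = {"rows": 5, "null_email_rate": 0.1, "distinct_countries": 2}

actual = profile(arrived)
for metric, want in expected.items():
    got = actual[metric]
    flag = "" if got == want else "  <- pattern broken?"
    print(f"{metric}: expected {want}, got {got}{flag}")
```

Here the null-email rate is wildly off the witnesses’ description of normal, so that is the broken pattern to chase.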

Step five: Tell us whodunit!
A careful analysis of the answers to these questions (and doubtless several more) should lead us to the solution we seek – to uncover how this crime was committed, why, and by whom. More importantly, we should have come to understand enough about it to prevent a recurrence: perhaps we need to tighten up security around our data a little; maybe we need to examine our processes and rethink their interactions with the data; we should probably have another look at the business requirements this data was supposed to fulfil – perhaps we are not dealing with one big data quality crime, but countless little ones that compounded one another, until our data was so fundamentally altered that it became unrecognisable.
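Whatever the verdict turns out to be, the most valuable output is a set of automated checks that would have caught the crime; a sketch of such a guard at the door, reusing the hypothetical rules from the earlier examples:

```python
import pandas as pd

def arrest_bad_records(df: pd.DataFrame):
    """Split an incoming batch into records that pass our hard-won rules
    and records held for questioning, so one small crime cannot quietly
    compound into the next."""
    passes = (
        df["postal_code"].str.fullmatch(r"\d{4}")
        & df["email"].str.contains("@", na=False)
    )
    return df[passes], df[~passes]

# Hypothetical incoming batch.
incoming = pd.DataFrame({
    "postal_code": ["8001", "99999999", "7700"],
    "email": ["a@x.com", "b@x.com", "not-an-email"],
})
clean, held = arrest_bad_records(incoming)
print(f"passed: {len(clean)}, held for questioning: {len(held)}")
```

Running a gate like this at each hand-off between systems is one way to stop the countless little crimes from compounding unnoticed.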

As we gain clarity, we can use this knowledge to educate those who work closely with potential victims of data crime, and enable them to take appropriate countermeasures, perhaps even nominate a few data deputies… Hmm, Deputy Data – I like the sound of that. Somebody give me a badge!

For more information on Danette and her work on data, see http://www.gfalls.com. And don’t forget to visit us at www.masterdata.co.za, to understand how we can help you investigate and solve crimes against your data.
