
Is Data Quality Management Worth The Effort?

Published On: 28 July 2021 | Categories: Tech & Data

Charles Palmer, Anchoram's Enterprise Data & Information Management services lead, says that there are four key points to consider when managing your data.


Organisations collect, store, and manage vast amounts of data to support their operations and decision-making activities. As data storage costs continue to fall, and the organisational appetite for ever more persistent, decision-informing information expands, so do the problems created by poor, variable, or mismatched data.

Although the financial, operational, social, and legal consequences of poor or inappropriate decision-making based on poor-quality data are well known, many organisations continue to fail to manage their data quality issues effectively, or even at all.

The proliferation of data and the digitisation of systems mean that huge amounts of effort and resources can easily be misdirected to non-critical data. As a result, data quality management is becoming increasingly costly.

Key aspects of data management to look out for

Data management has many facets, spanning the entire data lifecycle from collection through to aiding corporate decision-making. These aspects, and some pertinent questions, include:

  1. Data is the atomic representation of some state or event. How accurate is your data and therefore your understanding of the state or event?
  2. Information is data in a corporate context. Do your information management systems reflect your corporate context?
  3. Human beings are still key. What knowledge and skills are required to translate information into viable decision-making processes in your organisation?
  4. ‘Commercial-Off-The-Shelf’ (COTS) products. There is almost always a misalignment between a corporate taxonomy and the ‘one-size-fits-all’ data models of COTS products, which typically nudge organisations into managing data that reflects the designer’s representation of context. If the match is not exact, there is a risk that ‘good’ data is not collected, or is misrepresented.

The critical question is: ‘How best do we interpret data for a meaningful purpose when trying to use our corporate taxonomy within the limits of a COTS product’s data labels?’
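To make that mismatch concrete, the sketch below checks an organisation’s taxonomy terms against the field labels a COTS schema actually offers, and surfaces the concepts that have no clean home. Every term, label, and mapping in it is invented for illustration, not drawn from any real product:

```python
# Hypothetical illustration: the taxonomy terms, COTS field labels, and
# the mapping between them are all invented for this example.

corporate_taxonomy = {"client", "engagement", "deliverable", "risk_rating"}

# Field labels exposed by an imagined one-size-fits-all COTS data model.
cots_fields = {"customer", "project", "task", "status"}

# The partial mapping an organisation has worked out between vocabularies.
mapping = {"client": "customer", "engagement": "project"}

# Sanity check: every mapped target must exist in the COTS model.
assert set(mapping.values()) <= cots_fields

# Corporate concepts with no COTS home: data for these risks being
# shoehorned into ill-fitting fields, misrepresented, or never collected.
unmapped = sorted(corporate_taxonomy - mapping.keys())
print("Terms with no COTS field:", unmapped)
# -> Terms with no COTS field: ['deliverable', 'risk_rating']
```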

These are significant questions for anyone involved in risk management, and for government departments and businesses seeking to deliver the best services and products.

Adjunct Associate Professor at the University of Canberra and Anchoram Consulting’s lead Enterprise Data and Information Management partner, Charles Palmer writes:

Over a number of years, I have been exploring how to determine not just the accuracy, but the quality of information. In 2011, this pursuit culminated in a published thesis, ‘An Approach for Managing Data Quality’. I have applied those learnings in practical scenarios since then, and have further refined my approach to this important area.

In simplistic terms, I identified that when it comes to ‘errors’, we tend to fix what is easy – the ‘low-hanging fruit’. But shouldn’t quality assurance be focussed on high-value data? The answer, we hope, is a resounding ‘Yes!’ This realisation then begs the question: ‘How do we identify high-value data?’

After various thought experiments, and drawing on my day-to-day experience working with various organisations, I developed a method that examines the inter-relationships of data.

In essence, I contend that the impact of a piece of data can be determined from the frequency with which it is associated with other pieces of data, who uses it, how it is used, and the strength of those relationships. That impact, in turn, provides a measure of the data’s importance – that is, its value to the organisation.
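As a rough, illustrative sketch of that idea – a simplified stand-in with invented items and weights, not the published method itself – data items can be modelled as nodes in a weighted graph, with each edge weight blending association frequency, usage, and relationship strength; weighted connectivity then gives a first-cut value score:

```python
from collections import defaultdict

# Invented data items and weights: each edge weight blends how often two
# items are associated, who uses the association, and how strong it is.
associations = [
    ("customer_id", "order_id", 0.9),
    ("customer_id", "email", 0.7),
    ("order_id", "product_code", 0.8),
    ("email", "marketing_opt_in", 0.3),
]

# First-cut value score: weighted connectivity, where each association
# adds its weight to both endpoints.
score = defaultdict(float)
for a, b, w in associations:
    score[a] += w
    score[b] += w

# Highly connected items are candidates for 'high-value' status, and so
# for focused quality-assurance effort.
for item, s in sorted(score.items(), key=lambda kv: -kv[1]):
    print(f"{item}: {s:.1f}")
```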

This valuation approach assesses the business value of a modern database by applying six variables of value. In arriving at these variables, I was inspired by the work of the 19th-century English philosopher, political economist, and Member of Parliament John Stuart Mill[1]: ‘A System of Logic, Ratiocinative and Inductive’, subtitled ‘Being a Connected View of the Principles of Evidence, and the Methods of Scientific Investigation’.

I researched the facets of data and information categories by evaluating views, reports, forms, the complexity of the views, the use of hierarchies, and the businesses they pertained to. This enabled a unique perspective on each business or database and its purpose.

In this way, an organisation can determine what data really matters to its operations and plot this against its vision and purpose. It can then focus on the data that matters, map it to policy directives and activities, and, most importantly, identify the gaps between the two.
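That gap analysis can be illustrated with a simple set comparison – again with invented names, as a sketch of the idea rather than of the method itself:

```python
# Hypothetical names throughout: high-value items from the scoring step
# versus the items the organisation's policy directives actually govern.
high_value_data = {"customer_id", "order_id", "product_code"}
policy_covered = {"customer_id", "email", "payment_token"}

# High-value but ungoverned: where quality and policy effort should move.
print("Ungoverned high-value data:", sorted(high_value_data - policy_covered))

# Governed but low-value: effort possibly spent on data that matters less.
print("Governed low-value data:", sorted(policy_covered - high_value_data))
```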

To find out more please join us at the Australian Information Security Association (AISA) Canberra Branch and ISACA Canberra Chapter face-to-face meeting on 29 July 2021 for a live and interactive presentation on reviewing, reporting and recommending improvements to organisational information.

 

[1] John Stuart Mill was a prodigy, taught Greek from the age of three, and a free-thinker who became an early advocate for women’s rights; among his many works and democratic theories, he developed a theory of logic and information.
