Posted By Gbaf News
Posted on July 3, 2014
By Robert Gothan, CEO & Founder of Accountagility, the business process management specialist
As a habitual and heavy data user, I naturally enjoy my coffee rather a lot. Coffee is singularly responsible for getting me through many long, dark days spent trying to make data processes deliver numbers that actually mean something, and which I could substantiate to ever-looming auditors.
In an industry that is heavily, even dangerously, dependent on spreadsheets, it is important to note that any form of manual data entry introduces a significant risk of human, clerical error. More worrying still, most firms are probably not even aware of how many spreadsheets they are creating, of whether anyone else could understand them, or of the research showing that more than 80% of spreadsheets contain important structural defects.
Spreadsheets thrive precisely because they deliver results quickly, with little thought given to the process behind them. This unwavering focus on results, without consideration of process, is exactly what leads to risk. Of course, once firms start to throw some heavy machinery at the problem, they may begin to analyse, design and implement their processes in a more formal way, and thus pave the way for greater automation. After all, isn't automation meant to reduce risk by ensuring process consistency, reducing manual intervention and strengthening the controls around the process (typically change, data security and audit controls)?
Having decided that process is everything, however, firms could be forgiven for thinking that they have 'automatically' reduced risk to an acceptable level. Experience shows that the answer is not quite so simple. Automation, when not done correctly, can actually increase risk. For example, a firm's risk profile is likely to take a turn for the worse when users have to 'mop up' after processes that do not adequately address new business challenges.
As such, unless they take a keen look at the process outcome, firms will just be switching one set of risks for another. To return to the coffee analogy, when the time comes to brew that important first pot, they may find it difficult to know whether they’ve actually started with the right bag of beans.
And the comparison doesn’t end there. Creating the perfect cup of coffee requires good quality ingredients, excellent technique and a flourishing delivery. In fact, one could argue that coffee nirvana can only be the end result of a manufacturing process that has been designed to guarantee user satisfaction.
But what happens when a manufacturing process goes horribly wrong? Product recalls. In the data world there are "product recalls" of data, reports and other output every day, and people seem to accept this as the norm. I would argue strongly that this is time-consuming and resource-draining in many organisations, and an important symptom of systemic risk. As such, even if firms decide against re-inventing the wheel completely, they may want to put some effort into designing a robust data production line.
The first step in this process is typically data acquisition, often from different parts of the business. These are the raw materials from which any results will be derived, and so they require constant scrutiny. Bringing data together from disparate systems creates its own dynamics, however: the pieces may look like bricks, but do they actually fit together? The typical data (and risk) management infrastructure is a spaghetti junction, a tangled web of data and non-relational legacy systems that do not 'talk' to each other. Validation is therefore required to prove whether or not the data actually fits.
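The shape these checks take will differ from firm to firm, but a minimal sketch in Python might look like the following; the field names, control total and tolerance are purely illustrative, not drawn from any particular system.

# Minimal sketch: checking acquired data before it enters the process.
REQUIRED_FIELDS = {"account_id", "period", "amount", "source_system"}

def validate_record(record):
    """Return a list of problems found in a single acquired record."""
    problems = []
    missing = REQUIRED_FIELDS - record.keys()
    if missing:
        problems.append("missing fields: %s" % sorted(missing))
    if "amount" in record and not isinstance(record["amount"], (int, float)):
        problems.append("amount is not numeric")
    return problems

def reconciles(records, control_total, tolerance=0.01):
    """Check that the detail sums back to the control total reported by the source system."""
    detail_total = sum(r["amount"] for r in records)
    return abs(detail_total - control_total) <= tolerance

records = [
    {"account_id": "A100", "period": "2014-06", "amount": 1250.00, "source_system": "GL"},
    {"account_id": "A200", "period": "2014-06", "amount": "n/a", "source_system": "CRM"},
]

clean = []
for r in records:
    problems = validate_record(r)
    if problems:
        print(r["account_id"], problems)
    else:
        clean.append(r)

print("reconciles with control total:", reconciles(clean, 1250.00))

The point is not the particular rules, but that bad bricks are rejected at the door rather than discovered in the finished wall.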
Once the data has been proven, firms need to crunch it into something useful. Again, as in manufacturing, quality assurance (QA) needs to play a central role in the process from start to finish. Once the right data controls are in place, these transformation processes are much easier to design and they run like clockwork. Likewise, where validation and processing steps are combined, firms are able to create "self-testing" processes, which must surely be the holy grail of data processing.
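As a rough illustration of what a self-testing step could look like, the sketch below wraps a simple currency conversion in its own pre- and post-conditions; the field names and FX rates are invented for the example.

# Minimal sketch of a "self-testing" transformation step: the checks travel with
# the processing, so a failure is caught where it occurs, not in the final report.
def convert_to_base_currency(records, rates):
    """Convert amounts to a base currency, checking inputs and outputs as it goes."""
    # Pre-condition: every record carries a currency we hold a rate for.
    unknown = {r["currency"] for r in records} - rates.keys()
    assert not unknown, "no FX rate for: %s" % sorted(unknown)

    converted = [dict(r, amount=r["amount"] * rates[r["currency"]], currency="GBP")
                 for r in records]

    # Post-conditions: no records lost, no amounts dropped.
    assert len(converted) == len(records), "records lost during conversion"
    assert all(c["amount"] is not None for c in converted), "conversion produced empty amounts"
    return converted

rates = {"GBP": 1.0, "USD": 0.58, "EUR": 0.80}
print(convert_to_base_currency([{"account_id": "A100", "amount": 1000.0, "currency": "USD"}], rates))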
At this point in the journey, it's time for firms to present their data. I say 'present' rather than 'report' intentionally, since the promise is entirely different. Pure reporting has three downsides. First, it is one-way traffic, which neither requires nor empowers the end user to do anything more. Second, reports rarely give the audience any sense of how the numbers got there. When I get a report, my analysis inevitably leads to one dreaded question: "so… where DID that number come from?"
Lastly, there is always the risk that reporting and other presentation activities are simply there to validate the numbers. Rather than becoming a limitation, presentation should empower users to understand what they have been given, even enabling them to track back to source if need be. It should also provide a clear basis for process feedback.
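One way to picture the difference is a presented figure that carries its own lineage. The sketch below is purely illustrative, assuming a simple record structure rather than any particular reporting tool.

# Minimal sketch of presentation rather than reporting: the headline figure is
# handed over together with its lineage, so a user can drill back to source,
# and with a place for feedback to flow back into the process.
def present_total(records, label):
    """Return the headline figure plus the records and systems that produced it."""
    return {
        "label": label,
        "value": sum(r["amount"] for r in records),
        "derived_from": [r["account_id"] for r in records],
        "source_systems": sorted({r["source_system"] for r in records}),
        "feedback": [],
    }

records = [
    {"account_id": "A100", "amount": 1250.00, "source_system": "GL"},
    {"account_id": "A300", "amount": 430.50, "source_system": "Treasury"},
]
print(present_total(records, "June exposure"))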
This ‘Acquire-Validate-Process-Present’ model provides a solid framework within which most processes can be comfortably optimised and de-risked. Implicit in this model is the need to integrate new knowledge about a firm’s data and processes back into the processes themselves through both user feedback and QA. This ‘process intellectual property’ is extremely valuable, and businesses need to adopt working practices that prevent its attrition.
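Pulled together, the model might be sketched as a single pipeline along the following lines; the stage functions are placeholders for illustration, not any specific product's workflow.

# Minimal sketch of the Acquire-Validate-Process-Present model as one pipeline,
# with each stage logged so the run can be audited and lessons fed back in.
def acquire():
    return [{"account_id": "A100", "amount": 1250.00, "source_system": "GL"}]

def validate(records):
    assert all(isinstance(r.get("amount"), (int, float)) for r in records), "bad amounts"
    return records

def process(records):
    return sum(r["amount"] for r in records)

def present(total):
    return {"headline": total, "feedback": []}

def run_pipeline():
    trail = []
    data = acquire()
    trail.append("acquired %d records" % len(data))
    data = validate(data)
    trail.append("validated")
    total = process(data)
    trail.append("processed")
    result = present(total)
    trail.append("presented")
    return result, trail

print(run_pipeline())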
Generally speaking, firms should also be moving towards a self-service, less IT-dependent environment, one where risk data from multiple systems and sources can be accessed quickly, and where information to support decision-making is always readily available. This end-user empowerment approach makes it much easier to adopt a consolidated, single view of risk by eliminating disparate systems and individual data silos where information is hard to access and often impossible to check.
To summarise, the sources of process failure can often be pinned down to outdated assumptions, a lack of adequate data and process validation, and uncaptured intellectual property (including manual workarounds). If firms are truly serious about reducing risk in their data processes, it's important to realise that machinery can help, but it cannot de-risk a process on its own.