Avoid The Hidden Bear Trap – Benchmarks for Managing Interfaces and Data Complexity

This is my fourth and final blog on benchmarking as it relates to the Record-to-Report process, wrapping up the series.

Managing Interfaces

Interfaces between local ERP systems and the corporate reporting pack represent a hidden ‘bear trap’ for the unwary. Typically, automatic interfaces are managed by ETL (Extract, Transform and Load) tools, often under the supervision of the IT function, which frequently views the transfer of data as a technical exercise and may have little appreciation of the exact nature of the data being transferred. A significant risk of error arises when information requirements change but the consequences for the integrity of the interface are not recognized in time.

For example, a minor change, such as the addition of an expense line to the chart of accounts in a subsidiary’s ERP system, will usually require a change to the mapping tables that lie between the subsidiary’s accounts and the corporate reporting pack. Similarly, in the other direction, additional information (perhaps a new statutory disclosure) requested by corporate headquarters has to be mapped to the relevant operational system in the reporting entity.
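To illustrate the point, here is a minimal sketch in Python of the kind of completeness check that can catch this failure mode. The account codes and mapping table are hypothetical; a real ETL tool would apply the same idea within its own validation layer.

```python
# Hypothetical subsidiary chart of accounts after a new expense line (6150) is added.
subsidiary_accounts = {"4000", "5000", "6100", "6150"}

# Mapping table between subsidiary accounts and corporate reporting pack lines,
# not yet updated for the new account.
mapping_table = {
    "4000": "Revenue",
    "5000": "Cost of sales",
    "6100": "Operating expenses",
}

def unmapped_accounts(accounts, mapping):
    """Return local account codes that have no corporate reporting pack line."""
    return sorted(acct for acct in accounts if acct not in mapping)

missing = unmapped_accounts(subsidiary_accounts, mapping_table)
if missing:
    # In practice this would block the load or raise an alert in the ETL run.
    print(f"Unmapped accounts detected: {missing}")
```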

A failure to amend the interface completely and accurately is a common source of error and delay. Internal benchmarks are required to report on the frequency and nature of interface errors in order to isolate and repair recurring control weaknesses.
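As a sketch of what such an internal benchmark might capture, the snippet below tallies hypothetical interface errors by type and by interface, assuming a simple error log; the field names and categories are illustrative only.

```python
from collections import Counter

# Hypothetical interface error log for the review period.
error_log = [
    {"interface": "FR-subsidiary", "type": "unmapped_account"},
    {"interface": "FR-subsidiary", "type": "unmapped_account"},
    {"interface": "DE-subsidiary", "type": "segment_mismatch"},
    {"interface": "UK-subsidiary", "type": "late_file"},
]

# Frequency of each error type: highlights recurring control weaknesses.
by_type = Counter(e["type"] for e in error_log)

# Frequency by interface: shows where the weaknesses are concentrated.
by_interface = Counter(e["interface"] for e in error_log)

print("Errors by type:", dict(by_type))
print("Errors by interface:", dict(by_interface))
```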

Managing Data Complexity

Data complexity adds profoundly to the risk of failure in interfaces.  For example, take segmental reporting, which usually involves analyzing general ledger data in multiple dimensions (segments) to reflect the analysis required in the statutory accounts.  Mapping general ledger account code segments from operational systems to the corporate reporting pack has its hazards, but a more subtle form of error occurs when the GL codes are correctly mapped but the segmental analysis is not.
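Here is a minimal sketch of how this subtler error looks in practice: the account-level total reconciles, so a simple check passes, but the segmental split does not. The figures and segment names are invented for illustration.

```python
# Hypothetical operating expenses for account 6100, analyzed by reporting segment.
source_by_segment = {"Retail": 700.0, "Wholesale": 300.0}

# The same account after transfer: the total is right, but the amounts were
# collapsed into one segment by an out-of-date mapping rule.
reported_by_segment = {"Retail": 1000.0, "Wholesale": 0.0}

# An account-level reconciliation passes, so the error goes unnoticed...
assert sum(source_by_segment.values()) == sum(reported_by_segment.values())

# ...but a segment-level reconciliation exposes it immediately.
differences = {
    seg: reported_by_segment.get(seg, 0.0) - amount
    for seg, amount in source_by_segment.items()
    if reported_by_segment.get(seg, 0.0) != amount
}
print("Segment-level differences:", differences)
```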

This kind of error can be difficult to trap and may remain undiscovered for a considerable period of time because the core data appears at first glance to have been transferred completely and accurately. It is only at a much later date that errors begin to surface in the segmental analysis, by which time they can be very difficult to fix because of the distance the data has traveled up the organizational hierarchy.

So benchmarking the interface involves more than counting how many interfaces failed during the period under review, since there are different degrees of failure. This is why simplified, generic benchmarks can give false comfort. Organizations should therefore strive to develop benchmarks that reflect their specific needs and characteristics.
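One way to reflect those degrees of failure, rather than relying on a raw count, is to weight each incident by its severity, as in the illustrative sketch below; the weights and categories are assumptions, not a standard.

```python
# Hypothetical severity weights: a blocked load matters more than a warning.
severity_weights = {"warning": 1, "partial_load": 3, "failed_load": 10}

# Interface incidents recorded during the review period (illustrative data).
incidents = [
    {"interface": "FR-subsidiary", "severity": "warning"},
    {"interface": "DE-subsidiary", "severity": "partial_load"},
    {"interface": "UK-subsidiary", "severity": "failed_load"},
]

# A severity-weighted score distinguishes one serious failure from many trivial ones.
weighted_score = sum(severity_weights[i["severity"]] for i in incidents)
raw_count = len(incidents)

print(f"Raw failure count: {raw_count}, severity-weighted score: {weighted_score}")
```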