In our first two blogs (Part 1 & Part 2) of this series we reviewed the demographic and business undercurrents acting as the catalysts for a fundamental shift in the way that Finance and Accounting (F&A) personnel will be performing their jobs in the very near future. In this third blog, we will touch on the technology enablers which are providing the framework for this re-alignment as well as a glimpse at what the role of F&A professionals may look like as this transformation begins to hit its stride.
Much as we discovered in our assessment of the business dynamics, the enabling technology so crucial to this accelerating change also consists of two complementary meta-trends.
I. The Emergence of “Utility” Computing:
As is often the case, people bear witness to historic moments without really appreciating their significance until well after the moment has passed. For me, such an event occurred on two consecutive early evenings, February 14 and 15, 2011.
It was on these evenings, during the television game show “Jeopardy!”, that two of the most successful champions in the history of the well-regarded broadcast matched wits against IBM’s “Watson” DeepQA system. “Watson,” as the system is generally known, is a massive system composed of 90 IBM Power 750 servers, 2,880 processing cores and 16 terabytes of random access memory (RAM). Aside from the sheer power the 2011 version of Watson brought to bear, it also represented a spectacular breakthrough in the programming surrounding natural language search, machine learning and information retrieval. In fact, Watson is capable of accessing the equivalent of more than 200 million pages of data, applying over 100 analysis algorithms and parsing structured and unstructured data at blinding speed. Suffice it to say that over the course of those two evenings, Watson fairly obliterated the two human champions, taking the $1 million prize in the process. What many, including myself, did not know was that we had just witnessed an early step in the unfolding of what we now know as Robotic Process Automation. Watson’s legacy may be seen today in the early generation of “personal assistant” products, such as Amazon’s “Alexa,” Microsoft’s “Cortana” and Google’s unimaginatively named “Google Home” device. All that said, Watson’s triumphs are not without some very interesting circumstances, which we will return to a bit later in this piece.
Since those days in 2011, the march toward ever-available, ever-more-powerful compute has accelerated and, thanks to Moore’s Law, will continue to advance at a breathtaking pace. However, this latest round of development has also been supported by the explosion of inexpensive, always-on cloud technology. Led by the likes of Amazon Web Services (AWS), Microsoft Azure and others, companies may now access massive amounts of compute at rates that, in the past, they could only dream of. In many respects, compute is now regarded as akin to a “utility,” with the ability to scale usage as need dictates. In fact, in 2000 Marc Andreessen, the legendary co-founder of Netscape and of the world’s first cloud-computing company, LoudCloud, remarked that “enterprises building their own data centers and server farms are akin to factories today building their own power plants – why would you do it?” The commoditization of compute is now complete, and it makes widespread, sophisticated RPA accessible and affordable for the first time in history.
II. The Evolution of Finance and Accounting Robotic Process Algorithms:
One of the primary reasons that Finance and Accounting is so obviously suitable for RPA is the generally structured nature of a large percentage of the work presently performed by human accountants. General accounting, for the most part, tends to be predictable and rules-driven. Furthermore, the data itself tends to be organized neatly in rows and columns, making this a nearly perfect use case for the application of rules-based analysis in support of the human accountant. In fact, power users of Microsoft Excel will know that simple Boolean logic is available to any well-trained Excel savant who knows where to look. Boolean logic consists of familiar conditional language such as:
If condition “x” exists, then perform function “y”.
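To make the pattern concrete, here is a minimal sketch in Python of the same if-then logic an Excel user might express with an IF formula. The function name, threshold and amounts are invented purely for illustration:

```python
# A hypothetical rules-based check: route invoices above a threshold to review.
def review_required(invoice_amount, threshold=10_000):
    """If the amount exceeds the threshold (condition x), flag it (function y)."""
    if invoice_amount > threshold:
        return "route to manual review"
    return "auto-approve"

print(review_required(12_500))  # exceeds threshold
print(review_required(4_200))   # under threshold
```

Trivial on its own, but once a rule like this is institutionalized in software rather than in one analyst’s spreadsheet, it can run unattended across every transaction.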
As circumstances evolve, these logical rules may become very sophisticated and complex. The challenge, however, has always been: how do we institutionalize this sort of logic and apply it on a broad scale in order to reduce the time and effort associated with repeatable processes? The aforementioned ubiquity of scalable cloud computing, together with innovative software vendors, is now making this notion of standardized RPA rule sets a reality. In fact, early adopters of RPA rule sets in Finance and Accounting can point to some truly stunning results. Reductions in time to close, in error rates and, perhaps most germane to this series, in the need to hire are commonplace. Consider that a large multinational bank in Australia, presenting at a 2015 Shared Services and Outsourcing Network (SSON) event, estimated that its initial investments in high-volume RPA technology provided the equivalent of 100 full-time employees!
As the adoption of Boolean-based rule sets continues, and as they weave their way further into the fabric of F&A, the next logical question is, “what’s next?” For that, we will turn our attention to handling the inevitable circumstances where our Boolean rules do not pick up a given occurrence. At present, our only recourse is to engage our human F&A professionals to “manage the exceptions.” Perhaps that is not so bad, when we consider our earlier example of Jeopardy and Watson: on the final question of the 2011 match, Watson badly mangled a response that would have been fairly obvious to a human. Suffice it to say there is still no equivalent to the human brain… not yet, at least.
Conventional thinking is that the next phase of RPA in F&A will move beyond the well-defined rule sets of Boolean logic and will focus squarely on how to manage those circumstances where an exception to the rule exists. Exception management is particularly well-suited to conditional-probability-based algorithms, notably Bayes’ famous theorem. Succinctly put, Bayes’ theorem addresses the following:
If condition “x” exists, what is the probability that condition “y” also exists?
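In standard notation, Bayes’ theorem computes P(y | x) = P(x | y) · P(y) / P(x). A tiny Python illustration with an accounting flavor; every probability below is made up for the sake of the example:

```python
# Illustrative numbers only, not real audit statistics.
p_y = 0.01          # prior: probability any given entry is a mis-posting
p_x_given_y = 0.90  # probability of a large variance, given a mis-posting
p_x = 0.05          # overall probability of observing a large variance

# Bayes' theorem: P(y | x) = P(x | y) * P(y) / P(x)
p_y_given_x = (p_x_given_y * p_y) / p_x
print(f"P(mis-posting | large variance) = {p_y_given_x:.2f}")
```

Even with a rare condition (a 1% prior), observing the evidence “large variance” lifts the probability of a mis-posting to 18% in this toy setup, which is exactly the kind of re-ranking an exception-handling engine would exploit.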
One can easily see that alongside the use of rules-based logic, the application of conditional probabilities would substantially improve the classification and resolution of exceptions, again reducing the time and expense associated with having to manage these processes manually. Consider the following circumstance:
- We have an account that, under normal circumstances, has a balance of, say, $5,000, but during a given period it has a balance of $5,000,000.
Any well-trained accountant would note that this circumstance certainly warrants investigation and explanation. Naturally, we have a Boolean rule set that acts as a detective control. Perhaps something that states:
- If an account varies by more than 10% from one period to the next, then change the reconciliation date and add an additional layer of approval.
The accountant would now be proactively alerted to the fact that our fictional account is displaying unusual conditions, as opposed to having to “discover” it during the normal course of work.
However, we have another tool in our RPA kit: a conditional-probability-based algorithm making use of Bayes’ theorem. Something akin to the following:
- Given that this account has a variance of over 10%, the probability that the following conditions exist is: x% for a mis-posting, y% for an incident of fraud, etc.
Such a circumstance would, at a minimum, direct the human accountant to the most likely source of the exception, again saving time and reducing errors. But it is entirely plausible that we could establish rule sets and probability models that would allow us to “classify” exceptions with high degrees of precision, and do so without any human intervention. An approach of this nature could allow us to “train” the system in advance, using historical data to test the validity of our assumptions. The result is a virtuous loop in which our model continuously improves, thereby supporting the quality of our work. Concurrently, the amount of manual labor required to deliver high-quality, accurate financial statements would decrease, resulting in a more efficient process. All the while, our costs remain well under control, and we can scale simply by refining our model and adding compute. Is it any wonder that such a scenario would be viewed as “inevitable” by business?
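As a loose illustration of “training” on history, one could estimate the probability of each root cause from labelled past exceptions. The Python sketch below does exactly that with frequency counts; the cause labels and their counts are entirely invented:

```python
from collections import Counter

# Toy "training set": 100 past >10% variance exceptions, each labelled with
# the root cause the accountant ultimately found. Counts are invented.
history = ["mis-posting"] * 60 + ["timing difference"] * 30 + ["fraud"] * 10

def cause_probabilities(labelled_exceptions):
    """Estimate P(cause | exception) from labelled historical exceptions."""
    counts = Counter(labelled_exceptions)
    total = sum(counts.values())
    return {cause: n / total for cause, n in counts.items()}

probs = cause_probabilities(history)
for cause, p in sorted(probs.items(), key=lambda kv: -kv[1]):
    print(f"{cause}: {p:.0%}")
```

Each newly resolved exception appends another labelled observation to the history, so the estimates sharpen over time — a simple instance of the virtuous loop described above.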
So, there it is. Over the past three blogs, we have reviewed the changing demography and the necessity of replacing our tenured F&A personnel as they exit the workforce. We have examined the immediate and ongoing challenges of scale once the one-time labor savings associated with offshoring have already been achieved. Finally, we have considered the enabling infrastructure of utility-like computing and the emergence of centralized rules-based and probability-driven models as part of this future vision for Finance and Accounting. Taken in their totality, these three converging “fronts” truly represent the metaphorical “Perfect Storm”.
Written by: Ben Cornforth