
Blog post series: Plan your predictive analytics and machine learning journey with CCH Tagetik - Part 3

Aug. 25 2020 by Prof. Dr. Karsten Oehler, Solution Architect - CCH Tagetik DACH / Marco Van der Kooij, Managing Director - ForSight Consulting

Categories: Performance Management, Business Intelligence & Analytics

Planning is more than prediction! It requires decisions about activities, resources, investments, and more. And even in highly automated environments, planning involves manually entered data, because planning is about choice, not merely a reaction to what has gone before.

In this article we take a deep dive into planning your journey into predictive analytics and machine learning. We have already discussed how machine learning and predictive analytics can support Finance, and how forecast planning for P&L and other essentials can play a pivotal role in your business plan. Today, we want to talk about contributor analytics: how machine learning can help identify outliers in the data which validation might miss.

Many factors can introduce variability: complexity means decision making has to be delegated; targets get stretched to provide motivation; and planning commitments have to be consistent with expectations. So planning quality is about being consistent in your delegated planning decisions.

A critical factor for your planning quality is validation. CCH Tagetik offers validation functions that can detect data entry errors: missing data or exceeded thresholds, for example, or perhaps a sudden dramatic increase in sales costs, best detected directly after manual entry.
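To make the idea concrete, here is a minimal sketch of rule-based validation of this kind. The field names (`sales`, `sales_costs`) and the +50% threshold are illustrative assumptions for this example, not CCH Tagetik's actual validation API.

```python
# Illustrative sketch: simple rule-based validation of one manually
# entered record. Field names and thresholds are assumptions.

def validate_entry(entry, prior):
    """Return a list of validation findings for one planning record."""
    findings = []
    # Missing-data check: flag empty mandatory fields
    for field in ("sales", "sales_costs"):
        if entry.get(field) is None:
            findings.append(f"missing value: {field}")
    # Threshold check: flag a dramatic cost increase right after entry
    if entry.get("sales_costs") is not None and prior.get("sales_costs"):
        ratio = entry["sales_costs"] / prior["sales_costs"]
        if ratio > 1.5:  # illustrative threshold: more than +50% vs. prior
            findings.append(f"sales_costs at {ratio:.0%} of prior period")
    return findings

# Example: sales missing and costs doubled versus the prior period
print(validate_entry({"sales": None, "sales_costs": 200.0},
                     {"sales_costs": 100.0}))
```

Checks like these run directly after submission, which is exactly why they catch only the simple, rule-expressible errors.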

But what about more complex outliers? Here’s an example: A moderate sales decrease might be unremarkable in itself, but if competitors are seeing sales increase at the same time, it could be a significant outlier. Another example: The market climate is expected to improve but the sales rep’s prediction is pessimistic. This looks inconsistent but could be explained by a reduced sales force or the strength of your competition.

You get the picture. So what are the typical problems with manual or oversimplified validation?

Inconsistencies hidden in contributed data. There is usually too much data, and there is no efficient way to find them. An aggregated view, used to reduce complexity, often balances out (or hides) detailed problems.

Limited mutual understanding in contributor processes. Planning behavior such as sandbagging, etc. remains undiscovered.

The following graphic illustrates typical problems with contribution quality:


Figure 1: Possible Inconsistencies

The problems on the left side are easy to detect, more or less. But how do you find contradictions or conflicts of objectives? How do you detect overconfidence bias or a hockey-stick effect? These are typical patterns which can only be detected with sophisticated time and peer pattern comparison.

All these situations could be assessed as inconsistent and, in theory, found by taking a walk through the reported data. But there is usually too much data to investigate, and past data must be available for comparison, too.

So why not use machine learning to detect potential inconsistencies? Methods traditionally used for identifying fraud in transactional data are ideal for detecting suspicious patterns and checking the quality of contributed data. The contributed data can be compared with peers, historical data, external data, etc. These methods belong to unsupervised machine learning and are known as outlier detection. They can be very helpful for companies or groups with distributed or decentralized planning and reporting functions.
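A minimal sketch of the peer-comparison idea: flag any contributor whose submitted figure deviates strongly from the peer median, using the robust modified z-score (based on the median absolute deviation, which a single extreme value cannot inflate). This is a generic stand-in for the richer methods the article describes, not the product's actual algorithm.

```python
# Illustrative unsupervised outlier detection: compare each contributor's
# figure against the peer group using the MAD-based modified z-score.
from statistics import median

def peer_outliers(submissions, threshold=3.5):
    """submissions: {contributor: value}. Return contributors whose value
    deviates strongly from the peer median (modified z-score > threshold)."""
    values = list(submissions.values())
    med = median(values)
    mad = median(abs(v - med) for v in values)  # median absolute deviation
    if mad == 0:
        return []  # all peers identical; nothing to flag
    return [name for name, v in submissions.items()
            if 0.6745 * abs(v - med) / mad > threshold]

# Example: four sales units plan similar figures, one plans a steep decline
plans = {"North": 102, "South": 98, "East": 105, "West": 101, "Central": 40}
print(peer_outliers(plans))
```

Here "Central" is flagged, not because 40 is wrong in itself, but because it is far from what comparable peers submitted; the same comparison can be run against historical or external figures.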

It’s important to point out that this is only technical outlier detection. The results reveal exceptions but not necessarily inconsistencies. There may be an explanation for a detected outlier: a sudden cost increase can be explained, for example, by structural change. You still need human assessment and communication to confirm an outlier as an inconsistency, which is why outlier detection is only the first step.

Have a look at the following interaction between contributor and manager, where reasons for the outliers must be documented and exchanged. Machine learning algorithms can then adapt their results to improve consistency checks in succeeding planning rounds.

How is contributor analytics supported by CCH Tagetik?

  • As usual, the contributors work with CCH Tagetik data entry forms, taking the target expectations given by management and often an additional automated forecast of drivers. For instance, sales volume and costs have to be planned for the next year. The contributors, in turn, add their own expectations and ideas and finally submit their planning version.
  • This data is automatically analyzed by CCH Tagetik data processing using machine learning methods such as k-means clustering or Benford's law analysis, which produce a list of outliers. These methods compare recent contributions with historical data, peer data and possibly external figures. For instance, a certain sales manager reports a significantly higher cost/sales ratio than his peers. Furthermore, he failed to adapt to the management decision to increase the focus on certain new products (in comparison to his peers).
  • The manager gets a CCH Tagetik report with a list of potential problems and walks through the prioritized list of outliers. Some comments from the planner may already explain the exception(s). But if an outlier is not explainable, the manager can add comments and ask for additional information or corrections. This assessment is sent back to the planner.
  • The planners are notified and get a CCH Tagetik Report with all the items to be discussed. They now decide whether to adjust the data or provide a reason for the exception and can resubmit the data.

The process can be repeated if the adjustment still doesn’t satisfy the manager.
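Of the methods named in the steps above, a Benford-style check is the easiest to sketch: compare the leading-digit distribution of submitted figures with the frequencies Benford's law predicts (digit 1 should lead about 30% of naturally occurring amounts). The function below is illustrative only; the product's internal implementation is not described here.

```python
# Illustrative Benford's-law check on submitted figures: large deviations
# of observed first-digit frequencies from the expected distribution can
# hint at manufactured or heavily rounded numbers.
import math

def benford_deviation(values):
    """Return {digit: (observed_freq, expected_freq)} for first digits 1-9."""
    # Extract the first significant digit of each non-zero value
    digits = [int(str(abs(v)).lstrip("0.")[0]) for v in values if v]
    observed = {d: digits.count(d) / len(digits) for d in range(1, 10)}
    expected = {d: math.log10(1 + 1 / d) for d in range(1, 10)}
    return {d: (observed.get(d, 0.0), expected[d]) for d in range(1, 10)}

# Example: compare a small set of planned cost figures against Benford
figures = [1200, 1150, 980, 410, 560, 730, 115, 220, 1900]
for digit, (obs, exp) in benford_deviation(figures).items():
    print(f"digit {digit}: observed {obs:.2f}, expected {exp:.2f}")
```

On real planning data, the per-digit gaps (or an aggregate statistic over them) would feed the outlier list the manager reviews; a handful of figures, as here, is far too small a sample to draw conclusions from.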


Figure 2: Contributor Analytics Process

The algorithm can learn from all the information produced by this process. It is continuously trained on patterns and possible reactions, provides information about similar occasions, and automatically classifies future outliers into meaningful categories.

With this machine learning support, companies can benefit from:

  • Increased planning quality by detecting inconsistencies and controlled adjustments.
  • Opening up discussions to avoid adverse behavior like sandbagging, etc.
  • Better mutual understanding about targets, expectations and limitations.

To discover more, download the white paper "Machine Learning for Controllers: Use Cases for Forecasting, Planning, and Simulation".
