Seeing the wood for the trees: Predictive Analytics, Random Forest Analysis and Digital Transformation

By Adam Skinner, Head of PMO and Portfolio Management Practice at P2 Consulting, and Richard Rickards, Principal Consultant and Analytics for Transformation Lead

02.08.18

We are endlessly fascinated by the way fashions and fads sweep through professions – and the change management profession is no different. At the moment we at P2 are having regular conversations with our clients about applying predictive analytics to the delivery of transformation. What’s interesting is that, in terms of the fundamentals, there’s nothing particularly new about predictive analytics that should make it so current. The logic rests on the assumption that what has caused a particular effect in the past is likely to cause a similar effect in the future, which (despite the sound intellectual caution that the past is not necessarily a guide to the future) is often true in the short to medium term. What we think of as experience is essentially an individual who has operated in a particular environment long enough to unconsciously spot the patterns in the information that have, in the past, signalled trouble ahead.

Most predictive analytics works on this concept scaled up. Let’s take, for example, the application of this concept to credit card fraud, as outlined in Figure 1 below, and compare it with the challenge of identifying biased status reports in change programmes:

1. We identify a decision point (i.e. a credit card payment) and a validation criterion (i.e. whether it was valid or fraudulent)
2. We identify the factors that have a relationship to this decision point (where those payments are made from, what time of day, which products, how many, etc.)
3. We use a tool or algorithm to look at many past examples of that decision (e.g. global credit card payments over the last year) and work out which factors imply a valid decision and which an invalid one
4. We then apply that learning to future decisions (and the factors leading into them) to flag where the validity of a decision might be under threat (in the example above, flagging which payments might be fraudulent and bringing them to the attention of the fraud team)

[Figure 1: the four-step predictive analytics cycle applied to credit card fraud]
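
To make those four steps concrete, here is a minimal sketch in Python using scikit-learn (the library we mention later on). The data, column names and flagging threshold are synthetic illustrations rather than a real fraud model:

```python
# Minimal sketch of the four-step cycle, on synthetic data.
# All column names and the 0.8 flagging threshold are illustrative
# assumptions, not taken from a production fraud system.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000

# Steps 1 and 2: a decision point (each payment), a validation criterion
# (was it fraudulent?) and the factors that relate to the decision.
payments = pd.DataFrame({
    "hour_of_day": rng.integers(0, 24, n),
    "amount": rng.exponential(50.0, n),
    "payments_last_hour": rng.poisson(1.0, n),
})
# Synthetic ground truth: late-night bursts of payments tend to be fraud.
fraud = (payments["hour_of_day"] < 5) & (payments["payments_last_hour"] > 2)

# Step 3: learn from many past examples which factor patterns imply fraud.
X_train, X_test, y_train, y_test = train_test_split(
    payments, fraud, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Step 4: apply that learning to future decisions, flagging likely fraud
# for the fraud team's attention.
flagged = X_test[model.predict_proba(X_test)[:, 1] > 0.8]
print(f"{len(flagged)} of {len(X_test)} new payments flagged for review")
```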

This type of approach has been fairly standard practice in the financial sector for a number of years, with extremely positive results. The basic logic, as applied to the decisions taken by large-scale permanent organisations, should be just as applicable to temporary organisations like change programmes. For example, the logic used to spot a potentially fraudulent credit card payment should be applicable to spotting a potentially unrealistic milestone RAG or resource forecast. However, this theoretically sound approach has yet to be widely adopted in the transformation space, for a number of reasons:

a) The computing power and tools have not been freely available ‘out of the box’ or at the price point most programmes work at.
b) Algorithms developed as far back as the 1980s and 1990s, such as Random Forest, have taken time to achieve widespread application by data scientists.
c) The perennial problem of ensuring the data is standardised and of sufficiently high quality to support accurate decisions.
d) Fast-moving changes in the business and operational context need to be reflected in the data for it to remain relevant.
e) Traditionally the data sets have needed to be restrictively large to achieve predictive accuracy.
f) The change community (beyond the mega-project construction industry) simply hasn’t seen it as an available tool and so hasn’t looked to apply it!

So, why is it such an exciting topic now? Well, happily, the computing power and tools (for example open source software, development tools and affordable memory) are now highly accessible on-premises. They can also be pulled down from the cloud as a service, at a fraction of the cost and timescale of a permanent on-site solution. Open source tools and community-based learning (e.g. Massive Open Online Courses, or MOOCs) have driven the growth and application of data science skills, and the change community is growing more and more interested in the possibilities. But that still leaves the problem of data standardisation, and the simple fact that most programmes don’t have a large enough data set to make traditional predictive techniques work.

And that’s where machine learning approaches such as Random Forest come in: they are more powerful than traditional predictive techniques such as regression analysis. Random Forest can reveal more complex interdependencies between the factors that influence outcomes, and modern computing power reduces the cost and time of working through the ‘vast forests’ of possible interdependencies, including the sorts of complex non-linear relationships that are common on digital change programmes. It is also able to apply ‘weight’ to these individual factors, giving the approach far more nuance. All this means that high levels of predictive accuracy become possible with relatively small datasets, such as a few weeks or months of status reports for a series of programmes and projects within a portfolio. Random Forest can therefore be used on medium-sized digital transformation programmes after only a few weeks of implementation, rather than being the preserve of long-term national infrastructure mega-projects.
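
As an illustration of that ‘weighting’ point, the short sketch below fits a Random Forest to a small synthetic set of status-report factors and prints the learned importance of each one. The factor names and the rule generating the outcome are assumptions for demonstration only:

```python
# Sketch: ranking the 'weight' of factors behind a red status, on a
# deliberately small synthetic dataset. Factor names are assumptions.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
n = 120  # e.g. a few months of weekly reports across a small portfolio

reports = pd.DataFrame({
    "weeks_since_last_green": rng.integers(0, 10, n),
    "open_high_risks": rng.poisson(2.0, n),
    "milestone_slippage_days": rng.integers(0, 30, n),
    "resource_gap_pct": rng.uniform(0, 40, n),
})
# Synthetic outcome with a non-linear interaction: slippage only turns a
# project red when there are also several open high risks.
went_red = (reports["milestone_slippage_days"] > 14) & \
           (reports["open_high_risks"] > 2)

model = RandomForestClassifier(n_estimators=300, random_state=1)
model.fit(reports, went_red)

# The learned 'weight' of each factor:
for name, weight in sorted(zip(reports.columns, model.feature_importances_),
                           key=lambda kv: -kv[1]):
    print(f"{name:26s} {weight:.2f}")
```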

Recapping our cycle from the credit card fraud example earlier, when we step back we see that what is going on can be summarised by a few simple steps:

1. Establish data set
2. Set up rules to identify normal/abnormal behaviour
3. Act upon a trigger of abnormal behaviour
4. Learn and update data and rules

We can apply this process to transformation programmes and status reporting, as outlined in Figure 2. Only here the data are status reports (red/amber/green ratings) rather than credit card transactions, and the factors are things such as which project manager submitted the report, previous status reports, and other project management data such as risk ratings and status, rather than what was purchased on a credit card, when, where and for how much.

[Figure 2: the same cycle applied to programme status reporting]
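
As a hedged sketch of how that Figure 2 cycle might look in code, the functions below flag abnormal-looking status reports for human review, with validated outcomes folded back into the history for the next refit. The feature names, the ‘later proved unrealistic’ label and the flagging threshold are illustrative assumptions; a real PMO would define its own:

```python
# Sketch of the Figure 2 cycle for RAG status reports. FACTORS, the
# outcome label and the 0.7 threshold are assumed for illustration.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

FACTORS = ["rag_numeric", "risk_score", "weeks_since_replan",
           "manager_past_optimism"]

def fit_model(history: pd.DataFrame) -> RandomForestClassifier:
    """Steps 1-2: establish the data set and learn what normal looks like."""
    model = RandomForestClassifier(n_estimators=200, random_state=0)
    model.fit(history[FACTORS], history["later_proved_unrealistic"])
    return model

def review_cycle(history: pd.DataFrame, new_reports: pd.DataFrame,
                 threshold: float = 0.7) -> pd.DataFrame:
    """Steps 3-4: flag abnormal reports for review; once their outcomes
    are known they rejoin the history, so the rules keep learning."""
    model = fit_model(history)
    scores = model.predict_proba(new_reports[FACTORS])[:, 1]
    return new_reports[scores > threshold]  # handed to a PMO analyst
```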

The power of this approach brings big potential benefits:

• Close gaps between the SRO’s expectations and the quality of status management by staff
• Identify programme issues, delays and risk impacts weeks before they hit
• Take corrective action now, rather than when it is too late to steer back on course
• Identify which issues are the root cause of waste and delays, bringing consistent delivery
• Adaptively learn and optimise the PMO to eliminate the recurrence of problems
• Ultimately achieve programme success more quickly and at lower cost
• Engineer continuous assurance into the PMO, reducing the overheads associated with “deep dives” and external audit

The required investment is small in comparison to the potential benefits: resources in the form of experienced PMO expertise, a PMO analyst, and expert input on predictive analytics. The relevant algorithms are implemented using open source software (e.g. Scikit-learn in Python) and can be run on one or two well-specified desktop PCs, or very securely in the cloud. Now, as with much analytics, this comes with the health warning that the model output still requires human interpretation before action is taken – nothing takes the place of experienced Project/Programme and PMO Managers with a grip on the context. But as an early warning of potential areas of concern and focus, this is an extraordinarily powerful tool that is now within the reach of all PMOs and Programme Managers. It can also spot patterns in the apparent noise that the human eye may easily miss. We are hugely excited about the potential of these new tools and we hope you are too.
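
To illustrate that health warning in code, the small fragment below (building on the sketch above, so the fitted `model`, the `FACTORS` list and the report data are assumed to exist already) turns model scores into a ranked watch list for experienced staff to review, rather than an automatic intervention:

```python
def triage(model, current_reports, factors=FACTORS, top_n=10):
    """Rank reports by the model's level of concern. The result is an
    early-warning aid for Programme/PMO Managers, not an auto-action."""
    scores = model.predict_proba(current_reports[factors])[:, 1]
    return current_reports.assign(concern=scores).nlargest(top_n, "concern")
```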

To find out more about how this evolution in analytical power is being used to the full benefit of change and transformation programmes, please email Adam Skinner or Richard Rickards, or call +44 (0) 20 7099 0803 today.