When someone says ‘we do analysis’, you should immediately start looking for the hidden camera, because ‘data analysis’ means different things to different people.
What’s more, core statistical analysis skills have eroded. This means there is a very high probability (based on our own not-so-statistical model) that when someone says they ‘do analysis’, it may not be the type or quality of analysis you were hoping for.
Isn’t it ironic, don’t you think, that just as we enter this new world of Real-Time Customer Intelligence, the very skills needed to make sense of all that ‘Big Data’ have been quietly eroding away? Some may even argue that this erosion has been accelerated by the very channels that now seek to uncover the ‘Big Answers’ from their multi-channel activities.
Data has become a bit ‘Geeky Cool’, and with everyone claiming to ‘do analysis’ we’ve begun to see some truly horrific examples of bad analysis inflicting serious damage on businesses.
Of course, ‘analysis’ is a broad term and can also include:
- Summing rows and columns in Excel
- Extracting reports from Web Analytics and Email Broadcast engines
- Querying and drilling down into numbers from BI and MIS tools
All of the activities above are massively important; they underpin much of the decision-making that businesses rely on daily. However, the main statistical technique in use here has actually been around for a little while. It’s called ‘counting’.
The type of analysis we focus on here at R-cubed is the deep statistical analysis used to make sense of customer behaviour, predict future value and optimise communications.
We now have access to a wealth of data. We see not only what a customer buys, but when they came to browse, what they saw and which products they looked at (and rejected) before they bought. Customers tell us when they have some interest in us, even if they have not bought for some time, and they tell us what brought them to our site.
Smart analysis has always been key. Even before the digital revolution (when data was much more limited than it is today), analysts were rigorous in their testing; they built smart models that could be validated and, more importantly, delivered significant and demonstrable incremental returns from their work.
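As a toy illustration of what ‘demonstrable incremental returns’ means in practice, here is a minimal sketch with entirely made-up numbers (not real results): a model-selected group that was contacted, compared against a randomly held-out control group that was not.

```python
# Hypothetical campaign figures, invented purely for illustration:
# a contacted 'test' group and a randomly held-out 'control' group.
test_customers, test_revenue = 10_000, 62_000.0
control_customers, control_revenue = 10_000, 48_500.0

# The incremental return is what the contacted group spent over and above
# what a comparable, uncontacted group spent in the same window.
uplift_per_customer = (test_revenue / test_customers
                       - control_revenue / control_customers)
incremental_revenue = uplift_per_customer * test_customers
print(f"Incremental revenue attributable to the contact: {incremental_revenue:,.0f}")
```

The control group is what makes the return demonstrable: without it, all of the test group’s revenue could be claimed by the campaign, including spend that would have happened anyway.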
Today, good analysts should be doing what they’ve always done: taking the data available, harnessing the maximum power from it, and building an understanding of customers, households, trigger events and propensities.
They should be ‘rolling the data back’ (time-travel for data scientists) to understand which customer attributes and behaviours have led to specific results, removing any risk of self-fulfilling ‘insight’.
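To make ‘rolling the data back’ concrete, here is a minimal sketch assuming pandas and a hypothetical events table of our own invention. The discipline it shows: features are built only from behaviour observed before a chosen cutoff date, while the outcome being predicted comes only from what happened after it, so the model can never ‘peek’ at the result it is trying to explain.

```python
import pandas as pd

# Hypothetical event log: one row per customer interaction (names invented
# for illustration; any point-in-time behavioural table would do).
events = pd.DataFrame({
    "customer_id": [1, 1, 2, 2, 3],
    "event_date": pd.to_datetime([
        "2024-01-05", "2024-03-20", "2024-02-11", "2024-04-02", "2024-01-30",
    ]),
    "amount": [25.0, 40.0, 15.0, 60.0, 10.0],
})

cutoff = pd.Timestamp("2024-03-01")  # the 'as-of' date we roll back to

# Features may only use behaviour observed strictly BEFORE the cutoff.
history = events[events["event_date"] < cutoff]
features = history.groupby("customer_id").agg(
    spend_before_cutoff=("amount", "sum"),
    events_before_cutoff=("event_date", "count"),
    days_since_last_event=("event_date", lambda d: (cutoff - d.max()).days),
)

# The target is defined only by what happened AFTER the cutoff, so the
# model cannot use the outcome to 'predict' itself.
future = events[events["event_date"] >= cutoff]
bought_after = future.groupby("customer_id")["amount"].sum() > 0
training_table = features.join(bought_after.rename("bought_after_cutoff"))
training_table["bought_after_cutoff"] = (
    training_table["bought_after_cutoff"].fillna(False)
)
print(training_table)
```

Score a model built on this table against what customers actually did after the cutoff and the validation is honest; build features from the full, post-outcome data and the ‘insight’ merely restates the result.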
So surely, now that we all have access to this ‘big’ data, analysts should have an accelerated capability, right? Yes, they should… but much of the time this simply isn’t the case. So why are so many organisations struggling to get the quality of advice they need?