Accountants and Boards – Prepare for the Intangible/Digital Asset Arms Race
Lately I’ve been talking to a lot of chartered accountants, auditors and lawyers, and there is a big problem with traditional businesses “inventing” new asset classes out of pre-existing data assets and landing them on the books for the first time under IAS 38 Intangible Assets, or in my country, AASB 138. IAS WTF? AASB? OMG.
This is mind-bending intellectual territory for me as a systems programmer, and even more so for the non-data-aware business leaders I am trying to convince that their data is lazy and needs to produce more value.
Here’s how one of those conversations went. Badly.
A board director of a $33bn gas pipeline company asked me, “Why would we ever put something we can’t see on our books? We’ve never had data on our books before, so why now?” I pointed out that Snapchat floated for roughly the same value as his pipelines, and that’s where the discussion pretty much ended. He made a derogatory remark about the $33bn Snapchat float “having no real value”. I walked away reviewing my skills (or lack thereof) in persuasion.
Our Mr Pipeline is a “tangible-minded” guy, typical of the non-data-savvy, risk-averse leaders in legacy markets who are reluctant to see the shifting moment of opportunity to re-value their data. His advice was to raise Data Assets and Data Valuation via the Audit Committee, so the conversation was still instructive and not a dead loss.
On the topic of communicating with business leaders, I am reminded that we (the data industry) need better language to describe these intangible data assets in a way that resonates with traditional business leaders. At this point I need an expert like James Price from Experience Matters to help me with Mr Pipeline. In the meantime, I suspect Snapchat hasn’t missed a beat.
However, back in the real world, annual reports are increasingly introducing or re-stating intangible and digital assets.
OceanTomo and EverEdge Global are the IP law experts I follow closely in the space of Intellectual Property and Intangible Asset Value. Although they don’t have a deep Data Engineering background, they have pegged the intangible asset arms race accurately and help clients unlock more value from patents, brands, logos, trade secrets, and yes… data.
I am predicting some challenging board conversations in 2017–19 as data value and data assets get reset on the balance sheet, plenty of busy-work for Audit Committees, and lucrative work for auditors.
What does a Data Consumer produce with all this data?
We’ve successfully approached the Data Value and Data Monetization challenge from a different angle (still useful and, I think, quite pioneering): linking organizational activities and people effort (think “people & process”) to information usage, and calculating what happens in the business when “data goes bad”.
We think first about what additional value the Data Consumer produces. This means viewing data as a lifecycle, or a value chain, in which you and your smart data team produce a data product for a customer.
We analyze this information to put a quantum on business value output, and use that quantum to drive improvements (investments) in Data Architectures and Data Capabilities that shift the business value needle further in the positive direction, recapturing the commercial damage caused by bad data. We learned this “outside-in” approach from a nugget in Jack Olsen’s tectonic data quality writings.
If 20% is the average workforce damage due to bad data, what is “good” data worth? What is all our data worth?
In one multi-year case study, a client used our method in 2015 and discovered that 24% of its workforce could NOT perform their primary job function directly due to (some form of) bad data.
In analyzing these results, we were able to work out which Data Capabilities were absent, or of low maturity, and contributing to the value damage. Our process automatically produced the commercial roadmap of Data Capabilities to invest in to unlock more value. That semi-automated measurement activity took six days across 60 business leaders and all business units.
In 2016, when the same client re-ran our process (across the entire company), they found the “data drag” on the organization was only 10.1%: an effective productivity gain of 13.9% of the workforce, directly attributable to the recommended Data Maturity enhancements that clawed back the commercial damage. That fully automated measurement activity took 10 days across 700 globally connected staff on 2 continents.
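As a back-of-the-envelope sketch, the case-study arithmetic above works out like this (the percentages are from the article; the function and variable names are mine, purely for illustration):

```python
# Back-of-the-envelope "data drag" arithmetic from the case study above.
# Figures come from the article; names are illustrative, not a real tool.

def productivity_gain(drag_before: float, drag_after: float) -> float:
    """Workforce capacity clawed back when data drag falls."""
    return drag_before - drag_after

drag_2015 = 0.24   # 24% of staff could not perform their primary job due to bad data
drag_2016 = 0.101  # 10.1% data drag after the Data Maturity enhancements

gain = productivity_gain(drag_2015, drag_2016)
print(f"Effective productivity gain: {gain:.1%} of the workforce")
```

The same subtraction generalizes to the other case studies: re-measuring the drag after each capability investment yields the clawed-back workforce value.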
Coincidentally, in simple terms, this client is like an Uber-style dot-com, in the sense that they simply match products to customers using data. They also buy and sell datasets, which is more typical of how people view the “Data Monetization” topic today.
In our other case studies, a capital markets bank’s data damage was 23% of workforce per annum, and an energy retailer’s data damage was 13% of workforce per annum. Along the way, in various projects, our process and tool also:
- Accurately predicted staff attrition,
- Detected lost sales,
- Revealed fraud,
- Isolated revenue leakage,
- Revealed payroll anomalies,
- Uncovered unacceptable levels of customer atrophy in several business lines,
…and much more. You get the point.
These are all cool by-products of clever data analytics and of asking Data Consumers about their information usage. I refer to this as “eating our own dogfood”: in other words, using data to improve data value.
Pure Data Monetization vs the broader “Data Value” concept
Another client is in the livestock industry and owns national live-auction software which manages $5bn+ in sales throughput annually.
The data created is of immense value to governments (global and domestic), investors (global and domestic), feedstock suppliers (domestic), farm equipment suppliers (domestic) and cattle futures traders (global), to name a few.
This was a very traditional, rural-cultured business, now looking at harvesting and monetizing its exhaust data. I used our 2016 Data Monetization Canvas to recast that business model and add “data” as a new revenue stream, a new way to “milk more revenue” from their business model.
This second case study is probably closer to the traditional “Data Monetization” narrative that the industry thinks of, but the larger concept of “Data Value” can include measuring and increasing the value output of:
- Data Assets – (the data asset or data asset portfolio),
- Dataset(s) – (the raw data itself), and
- Data Capabilities – (the know-how, capability, maturity, design, governance, architecture, algorithms);
…which I think further expands the way we can think of “Data Value”, when a simplistic notion of Data Monetization might only focus on selling the dataset(s) in some format.
From first-hand experience, mature Data Monetization firms like Bloomberg, Reuters, Standard & Poor’s, Dun & Bradstreet, Nielsen, etc., have very mature Data Governance (know-how) and Data Architecture (know-how) regimes to produce their data products, like market rates and news feeds.
I suspect those working in the “Infonomics” space may share some sentiment here in relation to Data Engineering/Architecture/Disciplines (Data Value from the data know-how, not just the raw data).
The Data Consumer Is King. Worship the King.
Whoever worships the Data Consumer best wins. This means listening to, understanding, and supplying trusted data to them, when and how they want it supplied, at the right price. – Martin Spratt
There are some key principles that drive this paradigm for me, one of which is that the Data Consumer is king.
In the data supply chain, the Data Consumer is the final arbiter of “value”. They decide:
- What information do they want/need?
- What could they do if they had more data?
- What happens when they don’t get the data they need?
- What data and/or data services are they willing to pay (more) for?
Therefore, interacting with, exploring, and eliciting information from Data Consumers is a rich frontier for understanding and unlocking Data Monetization opportunities and Data Value.
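A minimal sketch of how the four elicitation questions above might be captured as structured records, so willingness-to-pay can be totalled across consumers. All names, consumers and dollar figures here are hypothetical, not from any client engagement:

```python
# Hypothetical record for eliciting the four Data Consumer questions above.
# Field names, example consumers and prices are illustrative only.
from dataclasses import dataclass, field

@dataclass
class ConsumerResponse:
    consumer: str
    data_wanted: list[str] = field(default_factory=list)    # what they want/need
    opportunities: list[str] = field(default_factory=list)  # what more data enables
    impact_when_missing: str = ""                           # what happens without it
    willing_to_pay: dict[str, float] = field(default_factory=dict)  # dataset -> $/month

def total_willingness_to_pay(responses: list[ConsumerResponse]) -> float:
    """Sum stated willingness-to-pay across all consumers and datasets."""
    return sum(v for r in responses for v in r.willing_to_pay.values())

responses = [
    ConsumerResponse("futures trader", willing_to_pay={"auction prices": 500.0}),
    ConsumerResponse("feedstock supplier", willing_to_pay={"herd volumes": 120.0}),
]
print(total_willingness_to_pay(responses))  # 620.0
```

The design choice is simply that each answer becomes a queryable field, so the “rich frontier” of consumer interviews turns into data you can aggregate rather than anecdotes.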
I’ve based this on 40 years of Data Quality work from DQ genius Tom Redman, augmented with revelations from another Data Quality thought leader, Jack Olsen, who first defined the concept of “Outside-In Data Quality”.
These ideas are what make our client outcomes and our software so simple, so fast and so powerful in discovering and uplifting the value of data.
Author: Martin Spratt, 19 Jan 2017. Martin Spratt is a data value guru, author and CDO advisor, held hostage in Melbourne by 4 women and a cat, and survives on cappuccinos. This article first appeared on ClearDQ.com