At the end or at the beginning?
Advances in technology have already transformed the finance sector, but they are now changing our approach to standards
Professor Trevor Harris is the Arthur J. Samberg Professor of Professional Practice at Columbia Business School and co-director of its Center for Excellence in Accounting and Security Analysis. In a career that has bridged academia and industry, Harris headed the global valuation and accounting team at Morgan Stanley, where he led the firm’s ModelWare project. He has served on the Standards Advisory Council to the International Accounting Standards Board and the Users’ Advisory Council to the Financial Accounting Standards Board, and was a member of the International Capital Markets Advisory Committee at the New York Stock Exchange. Having been involved in the early stages of XBRL, Professor Harris spoke to Sibos Issues about some of the challenges, both conceptual and practical, in standardising financial data.
To what extent have we reached consensus when discussing standards in relation to financial transactions?
When we talk about standards at a broad conceptual level, there may be consensus. But in practical application, and in deciding where standardisation is thought to be of use, it’s all over the place. In many instances, people do not really understand the stated use case. They have different perceptions about what they’re actually doing it for and what these uses require. XBRL proponents argued from the outset that by standardising and making it machine-readable, we would make financial data cheap, quick and easy to use for all investors. It should give you a formalised taxonomy for a set of financial metrics or transactions, but the implementation has not achieved this. Some proponents now argue XBRL is for data aggregators as it is too complex for most investors. But it is worse than that, because the process of standardisation has been captured by those who do not keep the use case at the forefront. For example, the International Accounting Standards Board (IASB) and the Financial Accounting Standards Board (FASB) aren’t getting close to agreeing on pretty basic stuff, with the latter rejecting the former’s taxonomy over differing definitions like ‘revenue’ versus ‘revenues’. No user would care, but it makes standardisation almost farcical.
At Morgan Stanley, I created ‘ModelWare’, a framework for transforming company data into meaningful, comparable metrics. Clients wanted to do some global comparative analysis and different people had come up with different ways to do that. Data service providers and investment houses interpreted and standardised underlying financial data in their own ways, each deviating from the others. When it comes to standardising any financial data other than at transactional level, you have to start making judgments, and if the definition is not representative of the underlying transaction, the output is meaningless.
With ModelWare, we standardised at the lowest data point we could get as external users and then created a set of calculations that utilised the tagged source data, so metrics could be easily redefined by users. Take ‘operating income’, a popular metric often described as earnings before interest and tax. People often also adjust for amortisation and in some cases measure it before depreciation too. When the US stopped amortising goodwill, but was still amortising certain intangibles and the rest of the world was in a different space, what was the right term to use? We ended up creating our own measure which avoided the ambiguity, ‘pre-tax operating profit’. Analysts and clients were in many ways interpreting what to include in ‘operating income’ themselves.
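The ModelWare approach described above can be illustrated with a small sketch. This is not ModelWare itself; the tag names, figures and formula format are hypothetical, but it shows the core idea: tag the most primitive line items once, then let each user define metrics as their own calculations over those tags, so a measure like ‘operating income’ can be redefined without touching the source data.

```python
# Hypothetical primitive line items, tagged once at source.
primitives = {
    "revenue": 1000.0,
    "cost_of_sales": 600.0,
    "sg_and_a": 150.0,
    "depreciation": 50.0,
    "amortisation_goodwill": 20.0,
    "amortisation_other_intangibles": 10.0,
}

def metric(formula):
    """Evaluate a user-defined metric: a list of (tag, sign) terms
    over the tagged primitives."""
    return sum(sign * primitives[tag] for tag, sign in formula)

# One analyst's 'operating income', after all amortisation...
op_income_a = metric([("revenue", 1), ("cost_of_sales", -1),
                      ("sg_and_a", -1), ("depreciation", -1),
                      ("amortisation_goodwill", -1),
                      ("amortisation_other_intangibles", -1)])

# ...another's definition, excluding goodwill amortisation.
op_income_b = metric([("revenue", 1), ("cost_of_sales", -1),
                      ("sg_and_a", -1), ("depreciation", -1),
                      ("amortisation_other_intangibles", -1)])

print(op_income_a)  # 170.0
print(op_income_b)  # 190.0
```

Because the two analysts disagree only in their formulas, not in the underlying tagged data, both numbers remain traceable to the same primitives.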
The way people classically think about standardisation involves doing it at the true node or primitive data level. Legal entity identifiers (LEIs) are an interesting example, because the legal entity is a true primitive in every contract. There is no requirement for interpretation, just creation and rigorous use of an identifier. Each bank has been expending a lot of effort, money and system time trying to make sure that, even internally, they know their own exposures to a single counterparty. Bringing global standardisation at that level is critical in some ways and can be cost effective. The key is that this standardisation happens at the root. In the US, the Financial Industry Business Ontology (FIBO) initiative is trying to develop a taxonomy of descriptions in a tree-like form that will define every financial instrument based on its primitives. For one set of instruments, that’s taken two years to try to develop. Others might come more swiftly. They may get to relatively standardised definitions, because they are starting at the most primitive. But even if they get agreement on the definitions, putting them into the systems of the banks in a way that they can actually use them is not only enormously expensive, but internal politics can make it very difficult, absent regulation.
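The counterparty-exposure point above lends itself to a toy illustration. The trades, systems and LEI values here are entirely hypothetical; the point is that once every trade carries the counterparty’s LEI, as a shared primitive identifier, aggregating exposure across internal systems reduces to a simple group-by.

```python
from collections import defaultdict

# Hypothetical trades from different internal systems, each tagged
# with the counterparty's LEI (a 20-character identifier).
trades = [
    {"system": "rates",    "lei": "5493001KJTIIGC8Y1R12", "exposure": 4.0e6},
    {"system": "equities", "lei": "5493001KJTIIGC8Y1R12", "exposure": 1.5e6},
    {"system": "fx",       "lei": "213800MBWEIJDM5CU638", "exposure": 2.0e6},
]

# Group exposures by the shared primitive: the LEI.
exposure_by_counterparty = defaultdict(float)
for t in trades:
    exposure_by_counterparty[t["lei"]] += t["exposure"]

print(dict(exposure_by_counterparty))
# {'5493001KJTIIGC8Y1R12': 5500000.0, '213800MBWEIJDM5CU638': 2000000.0}
```

Without the shared identifier, the same counterparty might appear under three different internal names, and the aggregation would require exactly the kind of interpretive mapping work the interview describes banks spending heavily on.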
Does regulation help or hinder such standards initiatives?
I think it’s a bit of both. Regulatory imperatives get people to focus on the issues and try to come up with better solutions. That’s different from the regulators trying to give the answers. At Morgan Stanley, we were monitoring and working with the XBRL developers, but were creating our own XML tags for our own purposes. At some point, the XBRL consortium took over many of the tags as they evolved the taxonomy. Then the Securities and Exchange Commission (SEC) mandated XBRL for registrants and, as the focus was the financial report rather than the source data, there was a push for FASB to take over the taxonomy. XBRL has proved cumbersome, technologically outdated and, to date, not useful for investors, as far as we can tell. So regulation can be helpful in requiring appropriate standardisation, but that’s very different to defining where and how the standards are implemented.
Regulation can be very constructive, as long as it stops short of trying to prescribe the answers. The regulators themselves shouldn’t be expected to understand all the details; this is really hard stuff.
Does regulation help from the perspective of making funds available within financial institutions to pursue standardisation projects?
Maybe, but I think it’s more complicated than that. Funding isn’t the real barrier, because firms are spending a fortune on this reference data anyway. Appropriate data standards would save them money. There is, however, a change that has to take place that will incur a one-time cost and it’s only worthwhile to the extent that everybody else is doing it. In these cases, regulation becomes critical. The problem is, how do you handle enforcement?
What lessons does XBRL have for other standardisation initiatives?
If you are going to do standardisation of data, you better get data scientists involved. The rate of change in data science is so fast that you risk getting stuck in an old technology, especially if it’s taking time to implement.
Another mistake that XBRL made in the context of financial reporting is that the SEC created the mandate to tag the financial statements as they exist. That’s the wrong place. The taxonomies and the standardisation are happening at too high a level. It goes back to my operating income example. Once you’re at that level, everyone’s got their own interpretation. You then have ‘zillions’ of extensions because you’re trying to cater for idiosyncrasies. By definition, if you go too far in that direction, there is no standardisation.
What’s the most efficient way of handling inevitable differences in interpretation across institutions so it doesn’t impede the use of the standard itself?
The first option is to standardise at the primitive – and by that I mean the most primitive. If you’ve got the primitives standardised, then people can create their own aggregation rules so long as the inputs are consistent. LEIs are a good example.
However, technology has evolved to the point where the standardisation may be better done at the use level: at the end rather than at the beginning. Today you can write the taxonomies to suit your needs and then suck in the data to fit into that.
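Standardising ‘at the end’ can be sketched in a few lines. The field names and filers below are hypothetical, but the pattern is the one described: the consumer writes the taxonomy to suit their needs and maps heterogeneous source labels into it, rather than forcing every producer to emit identical tags, which even covers the ‘revenue’ versus ‘revenues’ dispute mentioned earlier.

```python
# The user's own taxonomy: target field -> known source-label aliases.
# All labels here are hypothetical illustrations.
taxonomy = {
    "revenue": {"revenue", "revenues", "total_revenue", "turnover"},
    "net_income": {"net_income", "profit_for_the_year", "net_earnings"},
}

def normalise(record):
    """Map a source record's labels into the user-defined taxonomy."""
    out = {}
    for target, aliases in taxonomy.items():
        for label, value in record.items():
            if label in aliases:
                out[target] = value
    return out

# Two filers reporting the same facts under different labels.
us_filer = {"revenues": 1200.0, "net_income": 90.0}
ifrs_filer = {"turnover": 800.0, "profit_for_the_year": 60.0}

print(normalise(us_filer))    # {'revenue': 1200.0, 'net_income': 90.0}
print(normalise(ifrs_filer))  # {'revenue': 800.0, 'net_income': 60.0}
```

In practice the alias sets would be learned or curated rather than hand-written, which is essentially the consistency-learning approach the Dataminr example below describes; the design point is that disagreement between producers never blocks the consumer, because the mapping lives at the use level.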
The starkest example that I know is a company called Dataminr. They get a pipe of all the data from Twitter constantly and standardise those tweets to get news signals from them. You can put in anything you want in a tweet within the character limit. But you can learn consistencies and then write the standardisation around that. That’s essentially what a search engine does. When you think about how much information is out there today and the relative consistency in a lot of financial information, you have to ask yourself why we have to do this at the source, especially in something like financial reporting. What I teach my students today is, if you want to analyse a business, there’s less and less useful information in financial statements. There’s so much information outside. If you have the right framework within which to aggregate it, you are not restricted to the statements.