Wednesday 23 January 2019

Fixing XBRL to Build a Set of Investment Grade Financials in which you can have Total Confidence

So as promised, here's how you fix the XBRL to get your very own set of investment grade financials. See my last post as to why you need to do this.

This of course is also a sales pitch. We have a set of tools that we think does just that.

I've chosen a very simple single ratio model so you can see exactly what's going on and how easy it is to improve on the XBRL tagging filed by companies. I'm using our fully featured totaliZd product to do this and you can download both a before and after X Sheet to see the impact of these tools.

So let's say I'm a strong believer that research & development drives future growth. How does my target company stack up against its competitors?

Well, I don't know, because as a result of extensions (and standard totals missing from as reported filed financials - a common problem), my model in Excel is currently a mess.

Now to give this analysis some credence, I've chosen the first company with a significant extension issue from our previous list of the DJ30 companies and found four of its competitors against which to compare. And not a single R&D figure is coming through from the XBRL on the face of the financial statements for any of them!

This is not uncommon behaviour for a peer group of companies. They watch each other's disclosures and follow suit. There is even a button in the leading brands of XBRL creation software to allow you to do this! This is good and bad. Bad that it happens in the first place. Good in that it makes it easier to fix - we can quickly roll out the same change across the whole group.

Now at this point I would recommend you watch the video to see how to do this and how quickly it can be done. I was able to make the change, from model to filing and back again, for 4 companies in under 30 seconds.

totaliZd allows for layers of tagging so we never change the sacred as reported numbers and tags; we just build layers on top until we get exactly what we want, a view of the company that truly reflects what we wish to see. This way we can always get back to the genuine source figures. So we pick up the tag we want from our handy set of X Financials (a broad set of the most commonly used values and adjustments for modelling companies) and plonk it in the User Tag column. Job done. And when you have finished, this is what you will end up with.

So continuing our narrative, 3M looks like a poor performer compared to Corning, so based on our (very) simple and crude analysis, we should instead back Corning to generate greater revenues in the future.

Now we have only changed one tag here. And we can leave it at that if that's all we want to look at. But what other changes might we have to make if we wanted to fully prepare 3M, to create a full set of VAXBRL (Verified & Adjusted XBRL) ready for any analytical challenge we might want to throw at our target company? Well, if you look at the X Sheet after adjustments, you will see I had to add only five other tags to turn dubious as reported XBRL into Verified & Adjusted XBRL.

As the name implies, part of the process is Verifying the accuracy and validity of the XBRL, and this is what the totaliZd sheet enables you to do. One glance at this sheet tells you everything is tickety-boo and good to go. More on this in a later post - suffice it to say there is an awful lot of checking going on here (we instigate a triple-check procedure that enables you to look at the XBRL from three angles).

totaliZd enables you to adjust the XBRL - not to change it, but to improve on it to match your own modelling requirements. User Tagging creates a backstop so, no matter what the company churns out as a filing, you can easily fix or improve on it. You can make as many or as few adjustments as you wish and they will all be reflected in the totaliZd validation. You are in control, enabling you to decide what values go into your models. This way you can create a set of Financials which surpass any that you might purchase from the likes of Capital IQ or Bloomberg. This is serious value for money.

So confident are we that it will change the way you analyse companies, we have decided to offer a 60 day free trial of totaliZd (no credit card details or prepayment required). Just email us to take up this offer.

This post is part of a series that started here and is a follow-on to my last post - Why Verified and Adjusted XBRL is the Best Choice for Company Analysis

Thursday 17 January 2019

Why Verified and Adjusted XBRL is the Best Choice for Company Analysis

If you read my last post Do Extensions make XBRL Unusable out of the Box?, you will know that using XBRL without adjusting the tagging could well be calamitous.

So what are your options? Well, in the post before that, Which Source of Company Financial Data should I use?, I set out the options for finding the best source for comparative analysis. Let's reiterate them here, but I'm also gonna add one further choice.

1. Lifting values straight from the Financial Reports filed by a company
2. Data Vendor - Bloomberg, Thomson Reuters, Capital IQ, FactSet etc
3. XBRL (unadjusted)
4. Verified and Adjusted XBRL (VAXBRL)

The first choice is only an option if you have ridiculous amounts of time. The second, if you have ridiculous amounts of cash and are willing to trust that the data is completely error free and has been interpreted correctly. Vendor data has by its nature been handled by a third party, so it really needs to be verified if it is to form the basis of an expensive transaction. This is why it is never used as the primary source for in-depth company analysis in critical departments such as M&A.

My previous post explains the danger of using neat XBRL. Over half the Dow Jones 30 companies had used extensions in a way that could lead to significant errors in your analysis if you just plugged the raw XBRL into your model. Only 20% of companies had no extensions on the face of their Primary Financial Statements. And if the trend over the last five years is anything to go by (which I also examined in my analysis - I was interested to see what that fifth year of data might look like compared to the first), it's not going to get any better.

There is a viable alternative. And it is not expensive. And by expensive I mean in terms of time, that most precious of commodities. VAXBRL. Just spend a little time verifying and adjusting the XBRL. This way you'll always know the provenance of the source (direct from the company). And with the right tools, it can become an easy and inherent part of your financial modelling process over which you will have complete control.

What you'll end up with is a set of financials better than any vendor's at a fraction of the cost. Now that, I think, is what the XBRL revolution was supposed to be about.

Of course I wouldn't be telling you this if I didn't have a set of tools up my sleeve that might be just the job. Take a look at our totaliZd product. Download a totaliZd X Sheet at work (ready for adjustments). You can now also see a fully adjusted version of an X Sheet and the video I talk about in my next post.

Monday 14 January 2019

Does the current use of Extensions make XBRL unusable out of the box?

Yes. You can't just plug it in.

But does that make XBRL useless? No. It just means you'll need to make a few adjustments to the tagging. A small extra step in the process with the right tools. If you don't, and you insert raw XBRL numbers (the ones you can get from the SEC or XBRL.US) straight into your models, your analysis will be flawed. I'm not saying it might be flawed - it WILL be flawed.

I say that because I've looked at the following table. Well, actually I compiled it. It shows the incidence of extensions for all 25 of the non-financial companies in the Dow Jones Index (DJ30), for the latest published XBRL income statement and balance sheet, and was compiled using our X Sheet. As long as you only want to compare Apple against Intel, or the lone troika of other companies that have no extensions on the face of the primary financial statements, you should be fine. Otherwise you've got a problem.

You can download the full spreadsheet here.

I chose to look at the face of the two principal financial statements for the DJ30 because I wanted to look at the best case scenario: the most comparable values, tagged by what are theoretically the best-resourced companies, supported by the top audit firms and therefore capable of delivering pinpoint tagging. The tags I looked at of course represent a fraction of the tagged elements in an annual 10-K and, being the most commonly summarised amounts, are the ones in theory least prone to being extended. I excluded the financial companies as they are a bit more peculiar, so maybe even more liable to be extended. Like I said, I wanted the optimum target result.

Just in case you are not familiar with extensions, an extended tag is a useless tag. That's being a bit harsh, but it does make comparative analysis exceedingly difficult without intervening to pull it back into the US-GAAP taxonomy (the list of standard tags). The idea is that companies can extend this standard taxonomy by adding their own bunch of tags on top, where items don't fit into the list. This legitimately occurs, and it is part of the beauty of XBRL that it allows for this. It wouldn't be XBRL (eXtensible Business Reporting Language) without it. Unfortunately, in reality, it is up to the company to decide when this is so.

And so ineffectual is this type of extended tagging that when Europe joins the XBRL party next year, ESMA has specified that this type of disconnected extension will not be allowed. Every value will have a backstop connection to a tag in the IFRS taxonomy and, if that does not fully describe the item in question, an extended alias may be used as its primary tag.

So, having said all that, I'm not sure we should be seeing so many extensions on the face of the financial statements. Surely they should pretty much all look like Apple. Exceptions that prove the rule, not make a mockery of it. Anyway, I'm not gonna dwell on that. It is what it is. A more pertinent question then is whether these extensions really matter and, if so, what we can do about the data we have in front of us. This is why the table is not just a simple count of extensions but an examination, albeit cursory, of the potential impact on meaningful company analysis.

We've had a lot of disconcerting reports on extensions before, but I wanted to make this inquiry as real world as possible. So I took the top-level items and made a subjective assessment of the likely impact on KPIs, or on the ability to strip back figures for future forecasting. Some extensions I have deemed insignificant, some likely to have a minor impact, and others, which set off a big red flashing light, as major. So where a total, which can be calculated by combining its disclosed components using established taxonomy data validation rules, has been extended, this is deemed insignificant; but where restructuring charges have disappeared from the standard taxonomy by virtue of an extension, I have considered this a major problem. I have sometimes paid attention to the size of the amount, but it is not necessarily a reliable marker as, for example, in some years restructuring charges can be massive and in others minuscule.

Anyway, you can make your own judgement, as I have made the spreadsheet containing the table available, so you can use the X Sheet to see all the extensions for yourself by looking at the Filings tabs. Also, by looking at the totaliZd tab, you can quickly see the scale of the impact on a comprehensive set of standard values designed for analysis (we call our version the X Financials). The scaling option, a feature of totaliZd, has deliberately been set to billions so you can more easily see the significance of any impact. A value of one could even be significant, within or without its group, at this level of magnification.

As the screenshot above shows, any amount in the square green boxes is a problem that needs fixing. How you do this I cover in the next post.

What you see in totaliZd is as bad* as it can be. That is not a comment on the individual companies involved but on the limitations of using neat XBRL. This is the point of totaliZd: to provide a starting point from which you can quickly fix this apparent mess and leverage XBRL to perform the kind of analysis never previously possible. I'm not dissing XBRL. I'm simply qualifying how you use it.

*Things get off to a very bad start for Exxon and Coca-Cola, as neither of them supplies a Revenues tag. This isn't even an extension problem. This is just a "not using enough us-gaap tags" problem. Problems like this automatically get fixed in the X Sheet when using our standiZd option.

This post could really be considered part of a series I started in 2017 - And so has the dream come true? - looking at the state of XBRL disclosures and examining the continuing validity of the objections raised against using it.

Thursday 10 January 2019

Which source of Company Financial Data should I use for my model?

If we want to create a model of a company's future performance, we need a starting point: a set of inputs which we can examine, adjust and extrapolate into our best estimate of how the company will perform in the future.

So what are our choices?

1. Lifting values straight from the Financial Reports filed by a company
2. Data Vendor - Bloomberg, Thomson Reuters, Capital IQ, FactSet etc

I believe XBRL provides the best possible place to start. Why?

Well, as my old Physics teacher was never short of telling us, let's go back to first principles.

Our ideal starting point would be a perfectly accurate picture of how a company is performing right now. We can’t get this however for two reasons:

1. We don’t have real-time access to companies’ accounting systems, so we are constrained behind the curve by the reporting calendar.
2. We can only see the published external numbers, the numbers that the company allows us to see, subject of course to any legal disclosure requirements or the opinion of their auditors.

So even this, our best source, is inherently flawed, and we must be ready at all times to adjust or correct the figures put in front of us. Despite these caveats, the company still has to be our first port of call, because only they have the closest and best view of current operating performance.

So why would we use a data vendor?

Well, their starting point is exactly the same, but what they do is prepare the accounts for financial analysis. Preparing every single company this way, starting from the Financial Reports it files, is expensive, which is why they will charge you thousands of dollars for the privilege. Criticisms levelled at data vendors in the past have been that they don’t always get the figures right and that it’s not always clear how and what adjustments have been made.

Also, a standardised approach can be problematic, as the Corporate Finance Institute notes:

“Companies such as Bloomberg, Capital IQ, and Thompson Reuters provide powerful databases of financial data. However, financial statements retrieved from these databases tend to be in a standardized format. Thus, if the company uses an accounting value unique to its business operations, you will not grasp it from data retrieved and it will affect your analysis.”

This is why in critical decision making, when an investment decision or M&A deal could be worth millions of dollars, vendor data would never be used as the starting point for a single entity centric model. Of course this data has value in screening and modelling whole markets or sectors, but even here, for the reasons stated, the inputs are potentially flawed.

So what does XBRL bring to the party?

XBRL takes the effort out of lifting the financial values from the report and provides a first pass at standardization. Not normalization, mind, but a first step in that process if your goal is vertical analysis against a company’s peers. As I discuss in this article, unfettered XBRL, as filed with the SEC and made available through Edgar (or, for a fee, via the XBRL.US API), does not, nor indeed does it intend to, provide a perfect set of standardized values.

By harnessing the XBRL tagging, your models can be automatically derived from genuinely as reported values: the closest view of past operating performance, direct from the company and untampered with by any third party. The hard graft of lifting values, monotonous, error prone and time consuming, is removed. But as I underlined above, this is not quite the finishing point. Most of the hard work is done, but we must always be ready and prepared to make a few adjustments*. They will undoubtedly be required.

*How you can easily make these adjustments is discussed here and in the following video. If you want to read more about the need for adjustments in XBRL, then check out the next post in this series.

You can also read about totaliZd from Fundamental X here, our complete solution for preparing XBRL derived inputs to financial models in Excel.

Friday 4 January 2019

...taking control of an Excel Power Web Query is another thing entirely

This post is the follow up to Getting XBRL in Excel is dead easy with Power Query.

Now, to gain control of a "legacy" web query, to dynamically add your own variable parameters (say, inputting a ticker into an X Sheet), you had to feed the query into Excel via an iqy file (internet query - tech actually introduced by Microsoft way back in the last millennium).

You no longer have to mess around with peculiar files in Get & Transform, but in a lot of respects it's way more complicated. Because Power Query is about Tables, and it's about Tables; did I mention it's about Tables? Power Query thinks in Tables, all it outputs is Tables*, and what you input had better refer to a Table somewhere along the line or there will be no output.

*Well strictly speaking you can specify that it outputs part of a table so you can squeeze a solitary value out of it.
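To show what that drill-down looks like, here's a minimal M sketch (the Table name Prices and column Close are my own hypothetical examples, not anything from the X Sheet):

```m
let
    // The query still thinks in Tables: this step returns a whole Table...
    Source = Excel.CurrentWorkbook(){[Name="Prices"]}[Content],
    // ...but indexing row 0 and the Close column squeezes out one value
    LatestClose = Source{0}[Close]
in
    LatestClose
```

The {0}[Close] suffix is the standard M idiom for "first row, this column", and it is what turns a Table-shaped output into a single scalar.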

So you need to know about Excel Tables and you probably need to know a little bit of Power Query M (the query language that powers Get & Transform). There is some good news: you don't need to learn all of this if you don't want to, as we've done it for you, and as always, in the name of transparency, we will try to explain along the way what we've done and how it works. And maybe throw in the odd video as well.

Moving to Tables is mostly a good thing. It encourages good practice in the way you treat and organize your data, but it does mean you are entering the realms and rigidity of database structures (sort of Microsoft Access by the back door!). It also means (at least while you've got your head stuck in a query) you have to leave behind some of the lovely freedoms of Excel. The quaint idea that a cell could be any old data type, depending on what turned up, doesn't wash in a Table.

There are three ways of connecting a dynamic variable, each slightly more complicated than the last, and they are all demonstrated in this example Excel Workbook:

1. Put your dynamic variable in a Table. A Table! Now with a Table, Power Query can potentially see this variable and so we can edit our original query to reference it.

2. Create a new query to reference our new Table and edit our original query to reference this new query. A cleaner solution, and if we need to transform our variable, we have the space to do so in this new query. Also, we now have a dynamic variable query that we can re-use.

3. Even more editing to turn our second query into a Function. This is what we do in the X Sheet.

Whichever option you choose, you are going to come face to face with the Power Query M Language. With the third option, you will also need to enter the Advanced Editor.

Now I'm not gonna run through each stage of the process of creating the above three solutions because I'll be here all day but we have videos to show you exactly how to achieve each result. I will though highlight here the key components in each approach and any other useful observations.

1. Simple Table reference. Watch the video.

Once you have created your new table, you need to replace the static variable in the original web request with this:

Excel.CurrentWorkbook(){[Name="TableName"]}[Content]{0}[ColumnName]
So the key line in your original web query will look something like this depending on what you named your Table and its solitary Column:

Source = Web.Page(Web.Contents("" & Excel.CurrentWorkbook(){[Name="Company"]}[Content]{0}[Ticker])),

NOTE: Your variable doesn't have to be in a Table. You can just use a Named Range. It works because a Named Range is in fact a pseudo table. What you have to remember is that a Named Range has no column name, so Power Query enforces a default column name on it. So your ColumnName in this circumstance is Column1.
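So, as a sketch (MyRange here is a hypothetical single-cell Named Range, not something from the X Sheet), the Named Range version of the drill-down becomes:

```m
// A Named Range is exposed as a one-column pseudo table,
// so its column is always addressed by the default name Column1
Ticker = Excel.CurrentWorkbook(){[Name="MyRange"]}[Content]{0}[Column1]
```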

2. New query to reference our new variable Table. Best to watch the video.

You can easily make a query out of an existing Table in a sheet via Get & Transform. Power Query is very keen to define each Column in a Table as a specific type. We need to change the Changed Type Applied Step to:

Table.TransformColumnTypes(Source,{{"Token", type text}}){0}[Token]

This also ensures that we return a value rather than the whole table, by adding the "{0}[Token]" first-cell reference to the end.
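Put together, the whole of this second query might look something like the following sketch (I'm assuming the sheet Table is named Auth and its solitary column Token; adjust the names to match your own workbook):

```m
let
    // Reference the Table in the sheet holding our token
    Source = Excel.CurrentWorkbook(){[Name="Auth"]}[Content],
    // Type the Token column as text, then drill down to the first
    // cell so the query returns a single value rather than a Table
    Output = Table.TransformColumnTypes(Source,{{"Token", type text}}){0}[Token]
in
    Output
```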

The main query can now reference this new query which we called Authenticate in the video:

Source = Web.Page(Web.Contents("" & Authenticate & "&eid=" & Excel.CurrentWorkbook(){[Name="Company"]}[Content]{0}[Ticker])),

In the video we actually managed to make all the changes without having to tussle with the Power Query M Language in the Advanced Editor.

Best to save/convert the query to a Connection Only query. Otherwise you'll end up with a new unwanted Table in a new sheet every time you exit the Query Editor. You need to find the "Load To..." option to do this.

WARNING: If you are on the default (strictest) Privacy setting, the main query won't run and you will get a message something like this:

Formula.Firewall: Information about a data source is required.


...references other queries or steps, so it may not directly access a data source. Please rebuild this data combination.

Excel, rather pathetically, can't distinguish between a local source (our new variable Table) and an outside source at this juncture. Converting this query to a Function, as we do in the next option, resolves this piece of stupidity. We can of course also set our Privacy level, either at the Workbook level or in the Query Settings in the Query Editor (File > Options and Settings > Query Options > Privacy), to "We don't care" (Always Ignore Privacy Level Settings).

3. Turn our new query into a Function. And check out this video if you want to beef up the power!

This is our favorite option, the full-power Power Query option, but you do have to dip your toe into the Advanced Editor.

It saves the query from trying to create a new table in Excel every time (no need to make it Connection Only) and circumvents the mindless security objections described above. You can also, rather usefully, opt to pass a variable to a Function, as we do in the X Sheet. It's like proper programming! These advantages are spelled out in more detail in a post on my other blog - Jot About Excel. And again, of course, the Function you build is re-usable in other queries.

And in reality the changes are fairly simple. Top and tail your query with these two lines:

let Years = (YearNo as number) =>

in Years

where Years is what we named our new Function and YearNo is the variable we pass it (just use empty brackets if no variable is required). As you can see, if you are familiar with Lambda expressions, it uses a similar syntax.
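So a complete, if hypothetical, sketch of the finished Function might read as follows (example.com is a placeholder URL standing in for your real data source):

```m
let
    // Years is now a Function taking one numeric parameter
    Years = (YearNo as number) =>
        let
            // Build the request URL from the parameter
            // (example.com is a placeholder, not a real endpoint)
            Url = "https://example.com/filings?year=" & Number.ToText(YearNo),
            Source = Web.Page(Web.Contents(Url))
        in
            Source
in
    // Returning Years exposes it as an invokable Function
    Years
```

You can then invoke it from another query as Years(2018), or via the Invoke button in the Query Editor.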