Putting Some Numbers Around Amazon Prime

Amazon filed its 10-K report for 2016 late last week, and it discloses a few pieces of information which haven’t appeared in previous versions. Most notably, it provides a breakdown of revenue by similar products, which gives us the first real visibility into Prime and certain other categories. It doesn’t report Prime directly, but there’s enough data here to provide some really interesting insights into the Prime program: how many members it has, how much revenue it generates, and how that revenue is split between shipping and other services.

Amazon’s new revenue breakdown

First up, here’s the new breakdown. Revenue is split into five categories which, other than AWS, we haven’t seen broken out before:

  • Retail Products – this is basically all e-commerce plus most one-off sales of digital goods except those which are sold on a net basis (likely mostly apps); plus any direct shipping revenue associated with e-commerce purchases
  • Retail third-party seller services – this is all the revenue Amazon generates from its third party sellers including commissions, related fulfillment and shipping fees
  • Retail subscription services – Prime is the biggest component here, but it also includes Audible, Amazon Music, Kindle Unlimited, and other non-Prime subscriptions
  • AWS – Amazon Web Services, as reported in its segment reporting
  • Other – all the stuff that doesn’t fit in any other bucket, with co-branded credit cards and advertising the only two components called out specifically.

Here’s what that revenue breakdown looks like in percentage terms over the last three years:

As you can see, retail products are by far the largest component, but they’re falling rapidly as a percentage of the total, from 77% in 2014 to 67% in 2016, while the other categories are coming up fast. Here are the growth rates over the last two years for these various components:

As you can see, the growth rates are all over the map, with the fastest-growing being that mysterious Other category, which I suspect is largely driven by Amazon’s small but flourishing ad business. eMarketer estimates that this is already a roughly billion-dollar business for Amazon, so that would make sense, though the growth rate here is much higher than eMarketer projects. Those credit cards must be doing well too.

But outside that, it’s worth noting that third party services are growing much faster than product sales, with retail products (i.e. Amazon’s own direct sales) the slowest growing of any of these categories. Both retail subscriptions and AWS are coming down somewhat in percentage growth terms, but largely as a function of becoming quite big numbers – in both cases, the dollar growth year on year actually increased. That third party sellers are growing faster than first party is actually a good thing – Amazon’s margins on those services are much higher, because it reports only its cut rather than the gross take as revenue. This growth has been a major driver, alongside AWS, of Amazon’s increasing margins lately.

Deducing Prime subscribers

Let’s focus, though, on that retail subscriptions business, because that’s where Prime revenue sits. To start, we need an assumption about how much of that revenue is actually Prime. Morgan Stanley reckons it’s about 90%, and though I was originally tempted to put it higher, checking into the size of Audible made me think that’s probably about right, so I’m going to stick with it.

If we want to know subscriber numbers, though, we need to figure out what the average subscriber pays, and that’s a complex proposition because the US price of Prime increased by $20 in 2014, and Prime costs different amounts in each market. If we make reasonable assumptions about where those Prime subscribers are located (e.g. by using Amazon’s revenue split by country) and then apply the going rates at various times for a Prime subscription, we can arrive at a reasonable average. Mine starts at $76 in 2014 and rises to $81 in 2015 and $82 in 2016, whereas Morgan Stanley’s is at $88 for both 2015 and 2016.

On that basis, then, here’s a reasonable estimate for Prime’s subscriber numbers over the last four years, together with a sanity check in the form of the minimum possible number Amazon might have based on various public statements it’s made:

The numbers you end up with are just barely above those minimum numbers provided by Amazon. There’s no way to be 100% sure about my numbers, but they certainly imply that Amazon has been making the biggest possible deal out of its total number ever since that “tens of millions” comment at the end of 2013 (which referred to 21 million subscribers according to my estimate). These numbers would also help explain why Amazon didn’t provide a percentage growth number at the end of 2016 as it did in the previous two years: the percentage likely went down, again as a result of an increasingly large base, not lower subscriber growth – it added 20 million subs in 2016 versus 17 million in 2015.
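
For concreteness, here’s a minimal sketch of that estimation method in Python. Everything in it – the country mix, the local prices, and the revenue figure – is an illustrative placeholder rather than a reported number, and the 90% Prime share is the Morgan Stanley assumption adopted above.

```python
# Sketch of the Prime subscriber estimate described above.
# All inputs are illustrative placeholders, not Amazon's reported figures.

PRIME_SHARE = 0.90  # assumed Prime share of retail subscription revenue

def weighted_avg_price(mix, annual_price_usd):
    """Blend local annual Prime prices by an assumed subscriber mix."""
    return sum(share * annual_price_usd[c] for c, share in mix.items())

# Hypothetical 2016 subscriber mix and local annual prices in USD
mix_2016 = {"US": 0.60, "UK": 0.08, "Germany": 0.10, "Japan": 0.10, "Other": 0.12}
price_2016 = {"US": 99, "UK": 98, "Germany": 75, "Japan": 35, "Other": 60}

avg_price = weighted_avg_price(mix_2016, price_2016)  # ~$85 with these weights

subscription_revenue_m = 6_400  # $M, placeholder for the 10-K figure
members_m = subscription_revenue_m * PRIME_SHARE / avg_price

print(f"Average price ~${avg_price:.0f}; implied members ~{members_m:.0f}M")
```

With my actual country weights, the 2016 average price comes out at $82, which is what produces the subscriber numbers in the chart above.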

Prime revenue allocation

One other interesting wrinkle which I’ve wondered about for a long time is the way Amazon allocates revenue between the components of its Prime service, which after all combines free two-day shipping with a Netflix-like video subscription and various other benefits. Its financial reporting has always made clear that it allocates portions of this revenue to different buckets – specifically, its Net Product Sales and Net Service Sales categories – even though they all come from the same Prime subscriptions. Understanding this split may seem of purely academic interest, but in fact it’s key to divining the economics of Amazon’s Prime video business.

One interesting thing about the new grouping of revenues Amazon provided in its 10-K is that there is just one portion of revenue allocated differently here than in Amazon’s other reporting, and that’s the shipping component of Prime revenues. In the Net Product/Service Sales split, shipping goes into Product, whereas in the Similar Products split it goes into retail subscriptions. Therefore, if we look at the differences between the amounts reported in the various segments, we can deduce the Prime shipping component, and by implication the portion allocated to everything else (mostly video).

What you can see is that the revenue allocation is shifting quite significantly over time from shipping towards the rest – shipping was 63%, or almost two thirds of the total, in 2014, but only 56% of the total in 2016, though both components have risen considerably in dollar terms. For comparison’s sake, the Prime shipping allocation is around a third of Amazon’s total shipping revenue.
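
Mechanically, the deduction is a simple subtraction. Here’s a sketch with placeholder figures – the names and amounts are mine, not Amazon’s reported line items:

```python
# Deduce the Prime shipping allocation from the two revenue splits.
# Figures are illustrative placeholders, in $ millions.

# "Similar products" split: retail subscriptions INCLUDES Prime shipping
retail_subscriptions_similar = 6_400

# Net Product/Service split: the same subscriptions EXCLUDING shipping,
# which Amazon books under Net Product Sales instead
subscriptions_in_net_service = 2_800

prime_shipping = retail_subscriptions_similar - subscriptions_in_net_service
shipping_share = prime_shipping / retail_subscriptions_similar

print(f"Shipping: ${prime_shipping}M ({shipping_share:.0%} of subscriptions)")
# With these placeholders, shipping is ~56% -- the 2016 share cited above.
```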

Competing with Netflix on content will be tough at these levels

We can then compare Amazon’s non-shipping revenue (the vast majority of which should probably be seen as video revenue) against Netflix’s global streaming revenue:

What you see here is that Netflix’s revenue from its streaming business is massively larger, not least because it allocates the full $8-10 per month it collects from its nearly 100 million subscribers to streaming, whereas even with an estimated 70 million subscribers, Amazon allocates just under half of each Prime subscription to streaming.

This has significant implications for the viability of the two companies’ investments in original content. Netflix has committed to spending $6 billion in total on content in 2017, which is more than twice Amazon’s entire revenue from streaming video in 2016. To the extent that Amazon wants to be competitive in content, it either needs to lose money on the whole thing as a subsidy for its e-commerce business, or charge (or allocate) a lot more of its total take to streaming video. Interestingly, the standalone monthly Prime Video service Amazon offers comes in at $9, suggesting that without the flywheel benefits of free shipping, it needs to recoup something much closer to the full cost of providing the streaming service.
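
For a rough sense of the relative scales, here’s the back-of-the-envelope arithmetic, using rounded versions of the figures above – a sanity check, not reported data:

```python
# Back-of-the-envelope comparison of the two streaming revenue bases,
# in $ millions, using rounded estimates from the discussion above.

netflix_m = 100 * 9 * 12   # ~100M subs x ~$9/month x 12 months = ~$10,800M

members_m = 70             # Prime subscriber estimate above
avg_price = 82             # average annual Prime price estimate above
video_share = 0.44         # ~1 - 56% shipping allocation in 2016

amazon_video_m = members_m * avg_price * video_share  # ~$2,500M

print(f"Netflix ~${netflix_m / 1000:.1f}B vs Amazon video ~${amazon_video_m / 1000:.1f}B")
# Netflix's $6B 2017 content budget alone is more than twice Amazon's figure.
```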

Yet Another Reset for Twitter

Here we are, almost eleven years into Twitter’s history and a little over 18 months into Jack Dorsey’s second term at the company, and Twitter is heading for yet another reset. The company says it’s already been through a reset on its consumer-facing product, and that the changes it’s made are delivering results: positive year on year growth in daily active users, though Twitter still refuses to provide the underlying metric. It now says ad products need to go through a similar reset and re-focusing process. As a result of all this, the company isn’t even providing revenue guidance for Q1.

Here’s a quote from Twitter’s earnings call:

we remain focused on providing improved targeting, measurement and creative for direct response advertisers

Specifically, that’s from Twitter’s Q1 2015 earnings call, almost two years ago. But on today’s call, Anthony Noto said almost exactly the same thing again – some of this stuff has been in the works for over two years, and Twitter still doesn’t seem to be making meaningful progress. Rather, it’s now evaluating its direct response ad products to figure out which are delivering an appropriate return on the resources invested in them, with a view to killing some off.

Why is this all taking so long? It seems Twitter has been unable to focus on more than one big project at once, despite its arguably bloated workforce, and it’s hard to avoid the sense that this is mostly about management. It starts with Jack Dorsey, who is trying to run two public companies at once, but it continues with the next layer of management, where there’s been huge turnover in recent years and where product management seems to have been a particular challenge. It feels as though Dorsey wants to own product, because he has a founder’s authority in this area, but doesn’t really have the time to do it properly, which means both that things don’t get done and that the nominal heads of product get fed up.

The other big problem is that Twitter’s big competitors for direct response advertising – notably Facebook and Google – are just way better at this stuff than it is, and Twitter simply hasn’t made anywhere near enough progress here over the last few years. As a result, Twitter is enormously susceptible to competitive threats – its Q1 outlook is so vague because there was a meaningful difference in competitive intensity between the beginning and end of January alone. Any company that can’t predict its revenue a quarter out with reasonable confidence because of the competitive environment is really struggling.

In the meantime, ad revenue is actually falling year on year, despite the modest MAU growth and apparent growth in DAUs. US ad ARPUs dropped 8% year on year in Q4, and total US revenue was down 5.3% despite flat MAUs. The supposed increased engagement simply isn’t translating into revenue growth. The revenue growth trend for Twitter as a whole is pretty awful:

In percentage terms, the growth rate has been falling since Q2 2014, but even in pure dollar terms, growth has been slowing for a year. The EBITDA guidance for Q1 suggests a pretty big drop in revenue in the quarter, extending the streak here.

What Twitter’s management said today in their shareholder letter and on the earnings call is that it will simply take time for the increased user growth and engagement to flow through, and that Twitter essentially has to convince advertisers that it’s making progress in getting users engaged. But advertisers don’t spend money because of user growth trends – they spend money because it’s effective, and stop spending where it isn’t. Twitter seems to have a fundamental issue convincing advertisers that money spent on the platform will actually pay off, and I don’t see that changing just because it tweaks some ad formats.

Digesting Snap’s S-1

Snap Inc (maker of Snapchat) finally made its long-awaited S-1 filing public on Thursday evening. I’ve been dying to get my hands on this filing for months, and spent some time diving into it last night and digesting some of the numbers and other information in it. Here’s a quick summary of what I’ve found and some of my conclusions about Snap’s prospects going forward. Below, I’ve embedded a slide deck which shares many of the individual charts in this post and several more – it’s part of the Jackdaw Research Quarterly Decks Service, which offers similar decks on the most important consumer tech companies each quarter to subscribers.

Massive revenue growth

The first thing to note is that Snap is growing extremely fast from a revenue perspective. It showed its first ad in late 2014, and had its first meaningful revenue in 2015 (totaling $59 million), and then passed $400 million in revenue in 2016. The quarterly revenue picture is shown in the chart below.

That’s a very fast ramp, enabled by the fact that Snap held back on monetizing its base for several years following its founding in 2011. Facebook, by contrast, started to monetize the year it launched, and generated $382,000 in revenue in 2004. Its revenue ramp was slower ($9 million in 2005, $48 million in 2006, $153 million in 2007, $272 million in 2008, and $777 million in 2009), but it didn’t hit Snap’s current user scale until 2009. When Facebook turned on revenue generation, it had under 1 million MAUs, whereas when Snap showed its first ad it had 71 million daily active users.

ARPU growth a major enabler

The major driver of this ramp in revenues is rapid growth in average revenue per user (ARPU), as shown in the next chart:

Global ARPU has risen from 5 cents in Q1 2015 to $1.05 in Q4 2016, but the main driver has been revenue from North America, where ARPU was already $2.15 last quarter. The ARPU ramp in other regions has been much slower, with Europe generating just 28 cents per user per quarter in Q4, and the rest of world region just 15 cents. The one dollar ARPU isn’t far off Facebook’s global ARPU in Q1 2012, the last quarter it reported before its IPO, which was $1.21 globally. But its US & Canada ARPU was already up to $2.90 and its European ARPU at $1.40.
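
For clarity, ARPU here is simply quarterly revenue divided by average daily active users for the quarter, as Snap defines it in the S-1. A minimal sketch, with inputs rounded to roughly match the Q4 2016 figures cited above:

```python
# ARPU: quarterly revenue divided by average DAUs for the quarter.
# Inputs are rounded approximations of Snap's Q4 2016 figures.

def arpu(quarterly_revenue_usd_m: float, avg_daus_m: float) -> float:
    return quarterly_revenue_usd_m / avg_daus_m

print(f"Global ARPU: ${arpu(166, 158):.2f}")        # ~$1.05
print(f"North America ARPU: ${arpu(149, 69):.2f}")  # close to the ~$2.15 cited above
```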

Still a very US-centric financial picture

The reality is that Snap’s business is still very US-centric when it comes to generating revenue. North America had 43% of its users, but generated 88% of its revenues in Q4 2016 (over 98% of that coming from the US). That could be seen as an opportunity for Snap to broaden its horizons and put more effort into monetizing Snapchat in other regions, driving up ARPU, but this may also be a sign that Snap simply hasn’t gained the same traction in other regions yet. It increased its sales and marketing headcount by 340% in 2016, so there’s a good chance it’s hiring in these other markets to drive higher ad sales there.

Profits are another story entirely

While Snap’s revenue picture is fairly clear, the bottom line is a lot less healthy – Snap is losing money by the truckload. This may be one of the first companies I’ve seen file for an IPO whose cost of revenue alone outweighs its revenue in the most recent financial year.

Most margins are literally off the charts

It literally makes no sense to include here one of my customary charts showing various margins over time, because both of the biggest ones – operating and net margins – have been at -100% or multiples of it throughout Snap’s reported history (the only time I’ve seen anything like it is when looking at Alphabet’s Other Bets segment). Gross margin is the only one which is anywhere near positive, and was positive in the second half of 2016:

Snap’s cost of revenue is made up of two larger buckets and some smaller ones – hosting costs are by far the largest, and those scale fairly directly with user growth. Snap doesn’t break these hosting costs out in detail, but they grew by $192 million in 2016, and total cost of revenue in 2016 was $452 million, so my guess is that hosting costs were around $300-350 million in 2016. Snap signed a deal in January with Google to extend its use of Google’s cloud infrastructure, which commits Snap to spending a minimum of $400m in each of the next five years, so it’s a good bet Snap expects to spend at least that much in 2017.
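
Here’s the rough bounding logic behind that guess, sketched out. Only the total cost of revenue, the hosting increase, and the publisher revenue share are disclosed; the bounds are my own inference:

```python
# Rough bracketing of Snap's 2016 hosting costs from disclosed figures,
# in $ millions. The exact hosting number is not disclosed.

total_cost_of_revenue = 452   # disclosed
hosting_growth = 192          # disclosed year-on-year increase
publisher_rev_share = 58      # disclosed (see below)

# Hosting can't exceed total cost of revenue minus the known rev share
# (content creation and Spectacles inventory would push it lower still)...
upper_bound = total_cost_of_revenue - publisher_rev_share  # $394M

# ...and must be at least the disclosed year-on-year increase.
lower_bound = hosting_growth  # $192M

print(f"2016 hosting costs: between ${lower_bound}M and ${upper_bound}M")
# A $300-350M estimate sits comfortably inside that range.
```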

The second-largest contributor to cost of revenue – albeit a much smaller one – is Snap’s revenue share with its publisher partners. When Snap sells the ads (which accounted for 91% of its ad revenue in 2016), it gives publishers a cut, and this revenue share amounted to $58m in 2016, up from just under $10m in 2015. When partners sell the ads, they give Snap a cut, and it records only this net amount as revenue, so there’s no reported cost of revenue associated with that smaller chunk. The only other notable contributors to cost of revenue are content creation, where expenses rose $13m in 2016, and inventory for Spectacles, which only hit the books in late 2016.
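
Since that gross-versus-net distinction matters a lot for reported margins, here’s a minimal sketch of the two accounting treatments. The 30% splits are purely illustrative – Snap doesn’t disclose its actual rates:

```python
# Gross vs. net revenue recognition for a hypothetical $100 ad buy.
# Both percentage splits below are assumptions for illustration only.

ad_spend = 100.0
publisher_cut = 0.30   # assumed share owed to the publisher partner
snap_cut = 0.30        # assumed share Snap keeps when the partner sells

# Snap sells the ad: the full amount is booked as revenue, and the
# publisher's share is booked as cost of revenue.
gross_revenue = ad_spend               # $100 of revenue
gross_cost = ad_spend * publisher_cut  # $30 of cost of revenue

# The partner sells the ad: Snap books only its own cut as revenue,
# with no matching cost of revenue.
net_revenue = ad_spend * snap_cut      # $30 of revenue, $0 of cost

print(gross_revenue, gross_cost, net_revenue)
```

The gross treatment inflates both revenue and cost of revenue relative to the net treatment, which is one reason Snap’s cost of revenue looks so large next to its top line.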

I usually like to include a chart on cost components as a percentage of revenue, but in Snap’s case it makes more sense to show them as a multiple of revenues, as for most of the company’s history that’s what they’ve been. The two charts below show first a zoomed out view over the whole of the reported period and then a slightly shorter-term view excluding total costs and expenses, to make it easier to see what’s happening in detail with some of these expense lines.

Because Snap is so early in its monetization effort, some of its cost components were multiple times its revenues even in late 2015, and its cost of revenue was still almost twice its revenue in Q1 2016. But as the charts above show, there’s been some real progress here, and R&D, Sales & Marketing, and General & Administrative costs are all under half of revenues now and falling. Snap still has a long way to go, though, before it can be profitable: cost of revenue needs to come down considerably as a percentage of revenue, and that means ramping up ARPU to better cover those massive hosting costs. The rest of the costs will continue to come down as a percentage of revenue as Snap scales, so profitability should improve steadily on that front assuming Snap can get back to strong growth (more on this below).

It’s worth remembering that, when Facebook IPO’d in May 2012, it had been net profitable for three years. Meanwhile, Snap’s prospectus says matter-of-factly, “We have incurred operating losses in the past, expect to incur operating losses in the future, and may never achieve or maintain profitability.” Though that profitability should come in time with scale and rising ARPU, it’s not a certainty. Twitter also went public while unprofitable, at a time when it seemed a continuation of rapid growth would soon carry it over the line, and yet it still isn’t in the black (in fairness, Twitter’s S-1 carried a similar, though slightly less bleak, warning about its own profit prospects).

User growth is a mixed bag

Snapchat reports only daily active users, and not monthly active users. That’s actually very sensible, and I always take it as a knock on Twitter that it refuses to give DAU figures – for an app that’s supposed to be a regular daily habit, monthly user numbers are a bit meaningless.

Linear annual growth

Daily active users have grown strongly over Snapchat’s history, as shown in the chart below, which presents the longer-term end-of-year picture, including some estimates based on milestones Snap provides in the S-1.

The annual picture is incredible – I don’t know when I’ve seen such a straight line for user growth from a base of almost zero (it was roughly a million at the end of 2012, and only a few thousand at the end of Snap’s first year, 2011). The chart below compares this growth to Facebook’s growth over a similar period. It’s worth noting that the Facebook number here is MAUs, whereas Snap’s is DAUs, but the comparison is striking:

I’ve aligned the timescales so that the years when the companies had 1 million users by their respective measures (2012 for Snapchat, and 2004 for Facebook) line up. As you can see, the start and end points are not far off from each other – 1 million in the first year, and 161 million versus 145 million in the fifth – but the trajectory in between is very different. Facebook saw the classic s-curve adoption, while Snap’s has been almost linear.

A much less straight line for quarterly growth

Things get a lot less linear when you look at quarterly growth numbers, as shown below.

There’s something of the s-curve in the first two thirds or so of the chart above, where growth appears to accelerate through late 2015 and early 2016, but it tapers off significantly in late 2016. What happened there depends on who you believe, as there are two possible explanations:

  • Snap’s own explanation is that a number of product improvements in late 2015 and early 2016 accelerated growth and brought forward some of the growth it would have seen later anyway, while in late 2016 it launched a version of its Android app which had some bugs and caused slower growth
  • Third party data suggests that Snapchat began to slow down after Instagram launched its Stories feature, a clone of Snapchat’s own, which drove faster growth at Instagram and sucked usage and growth from Snapchat.

In fairness, Snap does acknowledge strong competition in the second half of 2016, but not Instagram specifically. Which explanation you believe is critically important for your view of Snap’s future prospects: if user growth really did slow down because of the competitive threat from Instagram, that isn’t going away, and in fact will only strengthen as Facebook brings Stories to the News Feed. If Snap can’t defend itself against such competitive threats, and if those threats cause an ongoing stagnation in user growth, it becomes a lot less appealing as an investment. On the other hand, if the issues really were a temporary combination of lumpy growth across the year and some Android glitches, that’s a much less gloomy statement about Snap’s future.

Differences by region

Where things get interesting is when you look at the regional breakdown of DAUs which Snap provides in the S-1 – the first of the charts below shows actual DAUs, while the second shows sequential growth in DAUs, both by region.

As you can see, there was an acceleration in late 2015 and early 2016 as Snap says, but there was also a slowdown in late 2016, though to very different extents in the regions. In North America and Europe, sequential growth in Q3 and Q4 was similar to its growth in the early part of the chart, but in the Rest of World region it dropped to zero in Q4. Now, these figures are inherently lumpy – they’re reported in whole millions of DAUs, so the underlying numbers may be moving more smoothly than these zigzag lines suggest – but there does seem to have been a meaningful slowing in Q3 and Q4, and that is worrying.

Terrible timing for the IPO

We won’t really know whether Snap’s explanation or the external explanation (or some combination of the two) is correct until we see another quarter or two of data from Snap on its user base. If it returns to strong growth in Q1 and Q2 of this year, investors can breathe a sigh of relief, but if it doesn’t, then the worries will continue. If I were a potential investor, I’d be very wary of making big commitments to Snap in a March IPO, before any of those figures are known.

The broader worry with Snap’s data here is that it really only provides DAU numbers as a measure of engagement. That’s better than MAUs, as I said above, but it still doesn’t tell you how engaged users are. This recent article in Bloomberg talks about the ways in which Snapchat fosters “streaks” by users, which drive them to open the app at least once a day, but which don’t necessarily drive meaningful engagement. The only deeper engagement stats Snap does provide relate to time spent and the number of times the app is opened – time spent across its base averages 25-30 minutes a day, while the app is opened 18 times a day on average, with younger users skewing higher and older users skewing lower. But as Snap provides no longitudinal reporting on these data points, we have no idea how they’re trending over time and what that might tell us about real engagement.

For both investors and advertisers, knowing what engagement really looks like is critical, but that data is missing here. Snap badly needs user growth along with rising ARPU if it’s to make progress towards profitability, and at this point the user growth side of the equation is uncertain, though ARPU looks to be on a healthier trajectory. Put another way, the timing of this IPO couldn’t be worse – rather than coming at a time of strong, consistent growth, it comes at the first time in Snap’s history when it’s showing signs of significant weakness.

Cord Cutting in Q3 2016

I do a piece most quarters after the major cable, satellite, and telecoms operators have reported their TV subscriber numbers, providing an update on what is at this point a very clear cord-cutting trend. What follows is this quarter’s installment.

As a brief reminder, the correct way to look at cord cutting is to focus on three things (a short sketch of the calculation follows the list):

  • Year on year subscriber growth, to eliminate the cyclical factors in the market
  • A totality of providers of different kinds – i.e. cable, satellite, and telco – not any one or two groups
  • A totality of providers of different sizes, because smaller providers are doing worse than larger ones.
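
To make that concrete, here’s a minimal sketch of the calculation, assuming a table of quarterly subscriber counts per provider. All the numbers are illustrative:

```python
# Year on year pay TV net adds across a totality of providers -- the
# methodology described above. Subscriber counts are in thousands and
# purely illustrative.

subs = {
    "BigCable":   {"Q3 2015": 22_000, "Q3 2016": 21_950},
    "SmallCable": {"Q3 2015":  1_200, "Q3 2016":  1_150},
    "Satellite":  {"Q3 2015": 33_500, "Q3 2016": 33_300},
    "Telco":      {"Q3 2015": 10_000, "Q3 2016":  9_500},
}

# Compare the same quarter a year apart (eliminating seasonality), summed
# across cable, satellite, and telco providers of all sizes.
yoy_net_adds = sum(p["Q3 2016"] - p["Q3 2015"] for p in subs.values())

print(f"Year on year net adds: {yoy_net_adds}k")  # negative = cord cutting
```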

Here, then, on that basis, are this quarter’s numbers. First, here’s the view of year on year pay TV subscriber changes – as reported – for the seventeen players I track:

[Chart: year on year net adds, all public players]

As you can see, there’s a very clear trend here – with one exception in Q4 2015, each quarter’s year on year decline has been worse than the previous one since Q2 2014. That’s over two years now of worsening declines. As I’ve done in previous quarters, I’m also providing a view below of what the trend looks like if you extract my estimate for DISH’s Sling subscribers, which are not classic pay TV subs but are included in its pay TV subscriber reporting:

[Chart: year on year net adds, excluding Sling]

On that basis, the trend is that much worse – hitting around 1.5 million lost subscribers year on year in Q3 2016.

It’s also worth noting that once again these trends differ greatly by type and size of player. The chart below shows net adds by player type:

[Chart: net adds by player type]

The trend here has been apparent for some time – telco subs have taken a complete nosedive since Verizon ceased expanding Fios meaningfully and since AT&T shifted all its focus to DirecTV following the announcement of the merger. Indeed, that shift in focus is extremely transparent when you look at U-verse and DirecTV subs separately:

[Chart: AT&T U-verse and DirecTV subscriber growth]

The two combined are still negative year on year, but turned a corner three quarters ago and are steadily approaching year on year parity, though not yet growth:

[Chart: AT&T combined subscribers]

Cable, on the other hand, has been recovering somewhat, likely benefiting from the reduced focus by Verizon and AT&T on the space with their telco offerings. The cable operators I track collectively lost only 81k subscribers year on year, compared with well over a million subscribers annually throughout 2013 and 2014. Once again, that cable line masks differences between the larger and smaller operators, which saw distinct trends:

[Chart: cable net adds by operator size]

The larger cable operators have been faring better, with positive net adds collectively for the last two quarters, while smaller cable operators like Cable ONE, Mediacom, Suddenlink, and WideOpenWest collectively saw declines, which have been fairly consistent for some time now.

The improvement in the satellite line, meanwhile, is entirely due to the much healthier net adds at DirecTV, offset somewhat by DISH’s accelerating declines. Those declines would, of course, be significantly worse if we again stripped out Sling subscriber growth, which is likely running at around 600-700k annually, compared with a loss of a little over 400k subs reported by DISH in total.

A quick word on Nielsen and ESPN

Before I close, just a quick word on the Nielsen-ESPN situation that’s emerged in the last few weeks. Nielsen reported an unusually dramatic drop in subscribers for ESPN in the month of October, ESPN pushed back, Nielsen temporarily pulled the numbers while it completed a double check of the figures, and then announced it was standing by them. The total subscriber loss at ESPN was 621,000, and although this was the one that got all the attention, other major networks like CNN and Fox News lost almost as many.

In the context of the analysis above, 500-600k subs gone in a single month seems vastly disproportionate to the overall trend, which is at around 1-1.5 million per year depending on how you break down the numbers. Additionally, Q4 is traditionally one of the stronger quarters – the players I track actually had positive combined net adds in each of the last three fourth quarters, and I suspect in every fourth quarter before that too. That’s what makes this loss so unexpected, and why the various networks have pushed back.

However, cord cutting isn’t the only driver of subscriber losses – cord shaving is the other major driver, and that makes for a more plausible explanation here. Several major TV providers now have skinny bundles or basic packages which exclude one or more of the major networks that saw big losses. So some of the losses could have come from subscribers moving to these bundles, or switching from a big traditional package at one operator to a skinnier one elsewhere.

And of course the third possible explanation is a shift from traditional pay TV to one of the new online providers like Sling TV or Sony’s PlayStation Vue. Nielsen’s numbers don’t capture these subscribers, and so a bigger than usual shift in that direction would cause a loss in subs for those networks even if they were part of the new packages the subscribers moved to on the digital side. The reality, of course, is that many of these digital packages are also considerably skinnier than those offered by the old school pay TV providers – DirecTV Now, which is due to launch shortly, has 100 channels, compared with 145+ on DirecTV’s base satellite package, for example.

This is the new reality for TV networks – cord cutting at 1.5 million subscribers per year, combined with cord shaving that removes some of their networks from some subscribers’ packages, is going to lead to a massive decline in subscribership over the coming years. Significant and accelerating declines in subscribers are also in store for the pay TV providers, unless they participate in the digital alternatives as both DISH and AT&T/DirecTV already do.

The US Wireless Market in Q3 2016

One of the markets I follow most closely is the US wireless market. Every quarter, I collect dozens of metrics for the five largest operators, churn out well over a hundred charts, and provide analysis and insight to my clients on this topic. Today, I’m going to share just a few highlights from my US wireless deck, which is available on a standalone basis or as part of the Jackdaw Research Quarterly Decks Service, along with some additional analysis. If you’d like more information about any of this, please visit the Jackdaw Research website or contact me directly.

Postpaid phones – little growth, with T-Mobile gobbling up most of it

The mainstay of the US wireless industry has always been postpaid phones, and it continues to account for over half the connections and far more than half the revenues and profits. But at this stage, there’s relatively little growth left in the market – the four main carriers added fewer than two million new postpaid phone customers in the past year, a rate that has been slowing fairly steadily:

[Chart: postpaid phone net adds for the big four carriers]

This was always inevitable as phone penetration began to reach saturation, especially among the portion of the US population with good credit. But that reality means that future growth either has to come almost exclusively through market share gains, or from something other than postpaid phones.

In that context, then, T-Mobile has very successfully pursued the latter strategy, winning a disproportionate share of phone customers from its major competitors over the last several years. The chart below shows postpaid phone net adds by carrier:

[Chart: postpaid phone net adds by carrier]

As you can see, T-Mobile is way out in front for every quarter but Q2 2014, when AT&T preemptively moved many of its customers onto new cheaper pricing plans. AT&T has been negative for much of the last two years at this point, while Sprint has finally returned to growth during the same period, and Verizon has seen lower adds than historically. What’s striking is that T-Mobile and Sprint have achieved their relatively strong performances in quite different ways. Whereas Sprint’s improved performance over the past two years has been almost entirely about reducing churn – holding onto its existing customers better – T-Mobile has combined reduced churn with dramatically better customer acquisition.

The carriers don’t report postpaid phone gross adds directly, but we can derive total postpaid gross adds from net adds and churn.
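
Since the carriers report churn as an average monthly percentage of the base, the derivation works roughly as follows – a sketch with illustrative inputs, not any carrier’s actual figures:

```python
# Deriving postpaid gross adds from reported net adds and churn.
# Carriers report churn as an average monthly % of the postpaid base, so
# quarterly disconnects are roughly churn x average base x 3 months.
# All inputs below are illustrative.

def gross_adds(net_adds_k: float, monthly_churn: float, avg_base_k: float) -> float:
    disconnects_k = monthly_churn * avg_base_k * 3  # three months per quarter
    return net_adds_k + disconnects_k

# e.g. +500k net adds on a 30M base with 1.3% monthly churn
print(f"{gross_adds(500, 0.013, 30_000):,.0f}k gross adds")  # ~1,670k
```

The resulting chart below, showing those derived gross adds as a percentage of each carrier’s base, is one I find particularly striking: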
[Chart: postpaid gross adds as a percentage of the base, by carrier]

What that chart shows is that T-Mobile is adding far more new customers in proportion to its existing base than any of the other carriers. Sprint is somewhat close, but AT&T and Verizon are far behind. But the chart also shows that this source of growth for T-Mobile has slowed down in recent quarters, likely as a direct effect of the slowing growth in the market overall. And that slowing gross adds number has translated into lower postpaid phone net adds over the past couple of years too:

[Chart: T-Mobile postpaid phone net adds by quarter]

That’s a bit of an unconventional chart, but it shows T-Mobile’s postpaid phone net adds by quarter, grouped so you can see how each year’s numbers compare to previous years’. As you can see, for most of 2015 and 2016, these net adds were down year on year. The exceptions were again around Q2 2014, and then the quarter that’s just ended – Q3 2016, when T-Mobile pipped its Q3 2015 number ever so slightly. The reason? Likely the launch of T-Mobile One, which I wrote about previously. The big question is whether T-Mobile will return to the declining pattern we saw previously once the short-term effects of the launch of T-Mobile One wear off.

Smartphone sales – slowing on postpaid, holding up in prepaid

All of this naturally has a knock-on effect on sales of smartphones, along with the adoption of the new installment plans and leasing, which are breaking the traditional two-year upgrade cycle. Growth in the postpaid smartphone base has been slowing dramatically over the last couple of years too:

[Chart: year on year growth in the postpaid smartphone base]

But the other thing that’s been happening is that upgrade rates have been slowing down significantly too. From a carrier reporting perspective, the number that matters here is the percentage of postpaid devices being upgraded in the quarter. This number has declined quite a bit in the last couple of years too, across all the carriers, as shown in the cluster of charts below:

[Charts: postpaid device upgrade rates for all four major carriers]

The net result of this is fewer smartphones being sold, and the number of postpaid smartphones sold has fallen year on year for each of the last four quarters. Interestingly, the prepaid sales rate is holding up a little better, likely because smartphone penetration is lower in the prepaid market. There were also signs in Q3 that the new iPhones might be driving a slightly stronger upgrade cycle than last year, which could be good for iPhone sales in Q4 if that trend holds up through the first full quarter of sales.
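
As a rough sketch of how those two moving parts – upgrade rates and gross adds – translate into smartphone sales volumes (my own simplification, with illustrative numbers):

```python
# Approximate postpaid smartphone sales as upgrades plus new additions.
# Note: carriers report the upgrade rate for all postpaid devices, not
# just smartphones, so this is a simplification. Inputs are illustrative.

device_base_k = 60_000            # 60M postpaid devices
upgrade_rate = 0.065              # 6.5% of the base upgraded this quarter
smartphone_gross_adds_k = 1_500   # new smartphone connections added

upgrades_k = device_base_k * upgrade_rate            # 3,900k upgrades
smartphones_sold_k = upgrades_k + smartphone_gross_adds_k

print(f"~{smartphones_sold_k:,.0f}k smartphones sold this quarter")  # ~5,400k
```

When the upgrade rate falls by even a point across a base that large, sales drop by hundreds of thousands of units, which is why slowing upgrades matter so much to total volumes.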

What’s interesting is that the upgrade rates are very different between carriers, and T-Mobile in particular captures far more than its fair share of total sales, while AT&T captures far less than it ought to. The chart below compares the share of the smartphone base across the four major carriers with the share of smartphone sales:

[Chart: share of smartphone base versus share of smartphone sales, by carrier]

As you can see, T-Mobile’s share of sales is far higher than its share of the base, while AT&T’s (and to a lesser extent Verizon’s) is far lower.

Growth beyond phones

So, if postpaid phone growth is slowing, growth has to come from somewhere else, and that’s very much been the case. Tablets had been an important source of growth for some of the carriers for a few years, but that aggressive pursuit has begun to cost them dearly, at least in the case of Sprint and Verizon. Both carriers ran promotions on low-cost tablets two years ago and are now finding that buyers don’t feel the need to keep the relationship going now that their contracts are up. Both are seeing substantial tablet churn as a result, and overall tablet net adds are down by a huge amount over the past year:

[Chart: tablet net adds]

There may be some recovery in tablet growth as Verizon and Sprint work their way through their churn issues, but I suspect this slowing growth is also reflective of broader industry trends for tablets, which appear to be stalling. Still in postpaid, there’s been a little growth in the “other” category, too, but that’s mostly wireless-based home phone services, and it’s not going to drive much growth overall. So, the industry likely needs to look beyond traditional postpaid services entirely.

Prepaid isn’t growing much faster

The next big category for the major operators is prepaid, which has gone through an interesting evolution over the last few years. It began as the option for people who couldn’t qualify for postpaid service because of poor credit scores, and was very much the red-headed stepchild of the US wireless industry, in contrast to many other markets where it came to dominate. But there was a period a few years back where it began to attract customers who could have bought postpaid services but preferred the flexibility of prepaid, especially when prepaid began to achieve feature parity with postpaid. However, that ebbed again as installment plans took off on the postpaid side and made those services more flexible. Now, we’re going through yet another change as a couple of the big carriers use their prepaid brands as fighter brands, going after their competitors’ postpaid customers. The result is that those two carriers are seeing very healthy growth in prepaid, while the other operators are struggling.  In the chart below, I’ve added in TracFone, which is the largest prepaid operator in the US, but not a carrier (it uses the other operators’ networks on a wholesale basis):

[Chart: prepaid net adds]

As you can see, AT&T (mostly through its Cricket brand) and T-Mobile (mostly through its MetroPCS brand) have risen to the top, even as Sprint has gone rapidly downhill and Verizon and TracFone have mostly bounced around roughly at or below zero. There is some growth here, but it’s all being captured by the two operators, while the others are treading water or slowly going under.

Connected devices – the fastest-growing category

The fastest-growing category in the US wireless market today is what are called connected devices. For the uninitiated, that probably requires something of an explanation, since you might think of all wireless connections as being connected devices. The best way to think about the connected devices category is that these are connections sold for non-traditional things, so not phones and mostly not tablets either, but rather connected cars, smart water meters, fleet tracking, and all kinds of other connections which are more about objects than people. The one exception is the wireless connections that get bundled into some Amazon Kindle devices as part of the single upfront purchase, where the monthly bill goes to Amazon and not the customer.

This category has been growing faster than all the others – the chart below shows net adds for the four major categories we’ve discussed so far across the five largest operators, and you can see that connected devices are well out in front over the past year or so:

[Chart: comparison of net adds by category]

Growth in this category, in turn, is dominated by two operators – AT&T and Sprint, as shown in the chart below (note that Verizon doesn’t report net adds in this category publicly):

[Chart: connected devices net adds]

At AT&T, many of these net adds are in the connected car space, where it has signed many of the major car manufacturers as customers. The rest of AT&T’s and most of Sprint’s are a mix of enterprise and industrial applications, along with the Kindle business at AT&T. T-Mobile also has a much smaller presence here, and Verizon has a legacy business as the provider of GM’s OnStar services as well as a newer IoT-focused practice.

Though the connection growth here is healthier than the other segments, the revenue per user is much lower, in some cases only single digit dollars a month. However, this part of the market is likely to continue to grow very rapidly in the coming years even as growth in the core postpaid and prepaid markets evaporates, so it’s an important place for the major carriers to invest for future growth.

MacBook Pro with Touch Bar Review

Note: the version of this post on Medium has larger images and other benefits – I recommend you read it there.

On Thursday morning last week, Apple sent me a review unit of the new MacBook Pro with Touch Bar for testing. I’ve been using it almost non-stop since, to try to put it through its paces and evaluate this latest power laptop from Apple. I’ve only had four days with it, and so this is probably best seen as a set of early impressions rather than a thoroughgoing review, but here are my thoughts on using it so far. I’ll cover quite a few bases here, but my main focus will be on addressing two particular issues which I suspect people will have the most questions about: the Touch Bar and the power of this computer to do heavy duty work.

The model I’m using

First off, here’s the model I’m using:

[Image: specifications of the review unit]

In short, this is the 15-inch version, with 16GB of RAM, but it’s not the highest-end model. There is a version with a 2.9GHz processor and a Radeon Pro 460 graphics card, which would be a good bit more powerful for some tasks than the machine I’m using, though the RAM on that computer is the same.

I’m coming to this experience from using two main Macs over the past couple of years. When I’m at my desk, I’m typically using a 2010 Mac Pro with 32GB of RAM, a processor with twelve 2.66GHz cores, a massive SSD, and a Radeon GPU. When I’m mobile, I’m using a MacBook Air from a couple of years ago, with 4GB of memory and an Intel graphics card. In most respects, at least on paper, this MBP is a big step up from the MBA, but is less powerful than the Mac Pro, with the exception of the graphics card.

The Touch Bar

So let’s start with the Touch Bar. I had a chance to play around with the Touch Bar a bit at the launch event, and found it intriguing. It was already clear then that this was the kind of feature that could save time and make workflows easier if done right, but that would also come with a learning curve, and my first few days using it more intensively have confirmed both of those perceptions.

An analogy

The best analogy I can think of is learning to touch type. My oldest daughter has recently gone through this process, and I remember going through it when I was about the same age. Before you start learning, you’ve probably got pretty good at the hunt-and-peck method, and may even be quite fast. When you start learning to touch type, a lot of it is about forcing yourself to change your habits, which can be painful. At first, you’re probably slower than before, and the temptation is to go back to doing what you’ve always done, because it feels like you’re going backwards. But over time, as you master the skill, you get faster and faster, and it feels ever more natural. You’re also able to stay in the flow much better, watching the screen rather than the keys.

Learning to use the Touch Bar is a lot like that. If you already use a Mac regularly, you likely have pretty well-established workflows, combining mouse or trackpad actions, typing, and keyboard shortcuts. Suddenly, the Touch Bar comes along and gives you new ways of doing some of the things you’ve always done a certain way. A few may replace keyboard shortcuts, but the vast majority will instead be replacements for mouse or trackpad actions. The first step is remembering that these options are now available. The Touch Bar is quite bright enough to see in any lighting conditions, but it’s not intended to be distracting, so although you may be vaguely aware of it in your peripheral vision as you’re looking at the screen, it doesn’t draw your eye. You have to consciously remember to use it, a bit like how you have to consciously remember to use all your fingers when you’re learning to touch type.

At first, your instinct is to just keep doing things the way you’ve always done them. But then you start to realize that the repetitive task you’re doing by moving the mouse cursor away from the object you’re working with to the toolbar or to the Format pane at the side of the window could be accomplished much more easily by just pressing a button in the Touch Bar. You try it and it works great. The next time you do it a little more quickly, and pretty soon it’s a habit. The first couple of times it may take longer than your old method, because you’re having to break the old habit, but you quickly develop a new, more efficient one. Your mouse cursor stays by the object you’re working with (or out of the way entirely) and you go on with your work. I’ve been integrating the Touch Bar into some of my workflows over the last few days, and it’s now starting to feel natural; I’m getting to the stage where some things are faster than they were before.

Below are some samples that show the adaptability of the Touch Bar:

[Image: examples of Touch Bar layouts in different apps]

This adaptability is one of the strengths of the Touch Bar — the way it morphs not just between apps but based on the context within each app too. The video below shows several examples in quick succession as I move between apps and between contexts within apps. You’ll see how rapidly it changes as I go through these (there’s no sound on the video):

Most of the buttons are either self-explanatory or familiar enough to be intuitive, but I did find a couple of cases where I simply had no idea what a button meant. Since you can’t hover over these buttons the way you can over an on-screen button, there’s really no way to find out either, which can be tricky.

Ultimately, as I’ve written previously, the Touch Bar represents a philosophical approach to touch on laptops that’s very different from Microsoft’s all-touch approach to computers. I’ve used a few Windows laptops with touch, and though there have been times when it was useful, it’s often frustrating – the screen tends to bounce away from you when you jab it with your finger, and touch targets are often too small. Apple’s approach keeps the horizontal and vertical planes separate – the vertical plane on a MacBook is purely a display, while the horizontal plane is the one you interact with. This is easier on your hands and arms, and allows you to work more quickly because everything is within easy reach. The trackpads on Apple’s laptops have brought some of the benefits of touch to laptops over the last few years, and the Touch Bar takes this a step further.

Third party support

For now, the Touch Bar is only supported by first-party applications on the Mac, and most of Apple’s own apps now support it. However, if you’re a typical Mac user it’s quite likely that you spend a fair amount of time in third-party apps, and that’s certainly the case with me. I spend a lot of my time on the Mac in Tweetbot and Evernote, for example, neither of which supports the Touch Bar yet, except for auto-correction when typing, which is universal.

Apple demoed some third party apps with Touch Bar integration at its launch event, and below is a table of those apps whose developers have committed to supporting it so far:

[Table: third-party apps whose developers have committed to Touch Bar support]

For now, users will be able to take advantage of the Touch Bar inside the Apple apps and a handful of others, and that will mean adapting some workflows but not others. The experience here is going to be like the early days of 3D Touch support on the iPhone – it will be nice to have in the apps where it’s available, but there will be a lot of apps where it doesn’t work yet. In some cases, that’s going to push users towards apps that do support the feature, as was the case with 3D Touch. And since support is relatively easy to build, I would guess many developers will get on board quickly once the laptops are out.

Touch ID

Since the Touch ID sensor is part of the Touch Bar strip, it’s worth mentioning that briefly too. For anyone who’s used Touch ID on an iPhone or iPad, the value proposition will be fairly obvious – this is a great way to unlock your device without using a password. To be sure, people probably unlock their laptops many fewer times per day than they do their phones, but it’s still a handy time-saver. I’ve had Apple Watch unlock set up on my MacBook Air for a few weeks, and found that useful, but didn’t feel the need to set it up on this MacBook Pro because Touch ID is actually faster.

But Touch ID goes beyond just unlocking — it can also be used for various other functions where you’d normally enter your system password, including certain app installations and system changes. When it’s available, an indicator shows up in the Touch Bar strip pointing to the sensor, which is handy, because it can’t always be used in place of a password.

Siri

It’s also worth discussing the Siri button that’s part of the Touch Bar too. I’ve been using Sierra on my existing Macs for a couple of months now, but haven’t made much use of Siri, in part because I can never remember which hot key I’ve set to invoke it, and clicking the on-screen Siri button in the menu bar is too much trouble. Having a dedicated Siri button is definitely making me use Siri more.

Power and performance

On, then, to power and performance. I gave you the specs for the machine I’m testing earlier – it’s not the top of the line model, but given some of the commentary from the professional community and those claiming to speak on their behalf over the last couple of weeks, I wanted to put this side of the MacBook Pro to the test.

Testing

I’m not a regular user of heavy-duty creative apps, but I have used Final Cut Pro fairly extensively in the past, and have an Adobe Creative Cloud subscription which gives me access to other apps like Photoshop, Lightroom, Premiere, and Illustrator, some of which I use occasionally. As a first test, I imported some 4K video shot on my iPhone into the new version of Final Cut Pro and edited it. I checked all the boxes for analysis in the importing process, but it still completed quickly and without slowing down the computer. Both Final Cut and the other apps I had open continued to perform smoothly during the analysis and background tasks. The editing was smooth, and I got to use the new Touch Bar buttons at several points, adding in titles, transitions, and other elements, and then exported the file. Everything was quick and smooth, and the experience was very comparable to what I’m used to on my Mac Pro, which is where I’ve mostly used FCP in the past.

Next, I decided to push things a little harder and shot a longer 4K video while riding my bike. The bike was bumping around all over the place while recording, and as a result there was lots of movement and also rolling shutter issues in the video. I imported this video into Adobe Premiere, and then used the Warp Stabilizer effect to try to smooth out some of those issues. This task took quite a bit longer, but again the computer continued to function just fine while the task was underway, even when I simultaneously opened up Lightroom and imported several hundred RAW images from my DSLR. The fans did spin up during the Premiere background tasks, but I’ve noticed they’re quite a bit quieter on this new MacBook than on past MacBooks I’ve used, which I’d guess is due to the new fan design.

There is no doubt in my mind that this MacBook Pro is perfectly capable of handling heavy duty professional creative work. That’s not to say that a computer with more cores, more RAM, or an upgraded graphics card couldn’t do some of these tasks faster, but many creative professionals will have a stationary machine like a Mac Pro, an iMac, or something else back at their desk and will use the MBP when they’re on the go.

Input from creative professionals

As I mentioned, I’m not a creative professional, but I happen to have married into a family of them, so I checked in with three of my brothers-in-law who work as video professionals (two as editors and one as a producer). I asked them several questions about the hardware and software they use, their workflows, and attitudes towards these things in their places of work. Both the editors are currently using 5K iMacs with 32GB of RAM, and mostly use Adobe Premiere or Avid for editing (Final Cut Pro has fallen out of favor with the pro video editing crowd since the FCP X release, though at least one of them said he expected the latest update to win some former users back to the Apple side). This MacBook Pro, which maxes out at 16GB, wouldn’t match the performance of one of those 5K iMacs, but could well be the kind of machine they’d take with them if they were editing or reviewing footage on set. And with the ability to drive two 5K monitors, they could even finish the job back at the office on the same computer. It wouldn’t perhaps be as fast at some of the background tasks as an iMac or Mac Pro, but it would allow them to do the job just fine, and I think that’s the proper way to see this computer.

Portability

That brings me to the next thing that’s worth talking about, which is portability. The new 13″ MacBook Pro is being positioned as a successor of sorts to the 13″ MacBook Air — it has a similar footprint and weighs about the same, yet is far more powerful. This 15″ MacBook Pro, of course, is larger (and potentially even more powerful), and so obviously not to be seen as a direct replacement for the Air. But as that’s the transition that I’m making personally, it makes sense to make that comparison at least briefly. The MBP is clearly heavier and larger than the MBA, though not by as much as you might think. It weighs a pound more — 4 pounds versus 3 — but the footprint is very similar, and it’s actually thinner than the MBA at its thickest point. And of course it has four times the pixels on the screen. The images below should give you some sense of the size comparison:

[Photos: size comparison of the 15″ MacBook Pro and 13″ MacBook Air]

The true comparison, of course, is to the earlier 15″ MacBook Pro, which is roughly half a pound heavier and slightly thicker. I actually have an older 15″ MacBook Pro around as well, from about five or six years ago, and this thing is night and day from a size and weight perspective. Long story short, this is a very portable laptop, less so certainly than the 13″ one, but more so than any other 15″ Apple has ever made, and likely more so than most other 15″ laptops on the market today. And yet it has the power I talked about earlier.

Keyboard, Screen, and Audio

Three other hardware features are worth discussing at least briefly here.

Firstly, the keyboard. This keyboard takes the same approach as the one on the 12″ MacBook, but is a new version with a different dome switch that gives the keys a springier feel. I haven’t used the MacBook keyboard extensively, but this keyboard has been totally fine for me. I adjusted to it almost immediately, and it feels fine. I have noticed that typing on it is a little noisy, I think because I’m striking the keys with as much force as I used on laptops with more key travel; as I adjust, my typing is getting quieter.

The screen on this thing is beautiful. Apple now has P3 color on its newest iPhones, iPads, and MacBook Pros, and it’s a really nice improvement. I took some pictures of the Pro next to the Air to try to capture this, but it’s hard to get right in a photograph. However, looking at them side by side, there is both deeper color and a noticeably brighter screen on the Pro. And of course it’s a Retina display, so everything looks much sharper too. The combination of the Retina resolution, the brightness, and the color gamut makes it really nice for watching videos. I spent some time over the weekend watching a variety of video on it, and it was one of the nicest displays I’ve ever used for this.

Lastly, the sound. The new MacBook Pro has different speakers, and they’re quite a bit louder than on the MacBook Air. In my office, I have a stereo hooked up to an AirPort Express for AirPlay and play all my music that way, but the new Pro will do fine even on its own for sound volume and quality. I tested with a random iTunes track, as you can hear in the audio clip below. I recorded using an iPhone placed between the two laptops.

The sound quality is noticeably louder and fuller on the MacBook Pro, as I hope you can hear in that sample. Again, this makes it perfect for watching movies in your spare time, as well as for listening to music.

Ports and adapters

Another thing I’ve seen some concern about with this new MacBook is the ports, all four of which are Thunderbolt 3 / USB-C. That’s a new port for me – I’ve never owned a computer with a USB-C port, though two of the smartphones I’ve tested recently (the Google Pixel and LeEco Pro3) have USB-C charging. As a result, I was interested to see how I’d get by with my existing peripherals.

I made a trip to the Apple Store and picked up a few adapters:

  • Two USB-A to USB-C adapters for my USB peripherals
  • A Thunderbolt 2 to Thunderbolt 3 adapter for my Thunderbolt display
  • A USB-C Digital AV Multiport Adapter for another display that uses HDMI.

Of course, all these adapters are discounted until the end of the year, which was nice because the cost adds up fast. All of them worked fine, and I’ve appreciated being able to plug any of these peripherals into either side. It’s particularly nice to be able to shift power from side to side based on where the nearest outlet is.

This is a classic Apple situation – removing ports before the world has necessarily moved on, in part as an attempt to move people along. But in this case Apple is particularly far ahead of the market, and these adapters are a concession to that reality. Some people will already have USB-C or Thunderbolt 3 peripherals such as hard drives, and these will become increasingly common over the next few years. Along with the adapters, Apple sells a variety of LaCie, G-Tech, and SanDisk storage devices, as well as LG displays, all of which support USB-C natively.

But for now, we’re going to be using adapters with a number of existing peripherals. I already have a pocket full of adapters in my work bag for my MacBook Air – for presenting, using Ethernet cables, and so on – so I’m used to this situation. And as I pointed out on Twitter recently, even if you buy all the adapters Apple recommends as you go through the buying process for a new MacBook Pro, the cost is a tiny fraction of the total (and of course less than full price between now and December 31). I will say that it feels a bit odd that a brand new iPhone and a brand new computer can’t be plugged into each other out of the box, though I suspect many users no longer plug their iPhones into their computers at all.

Design

This is the first MacBook Pro to be available in Space Gray, and it’s a nice new option (this is the one Apple sent me, and in person it looks darker than in most of the pictures in this post). It’s sleek looking, and smudges and scratches will show up a lot less on this surface than on the bright silver surface of earlier MacBooks. It’s a good looking computer overall too, regardless of the finish. The display takes up much of the vertical plane, with fairly small bezels (one of the ways Apple was able to shrink the footprint), while the horizontal plane looks really good with the addition of the Touch Bar and a larger trackpad.


I’ve found that trackpad to be totally fine, by the way – even though it’s consistently under the heels of my hands, I’ve never once accidentally moved the cursor or clicked on anything while typing. I will say that I use the bottom right corner for right-clicking, and that corner is now a long way from the center, which has resulted in some failed right-clicks when I haven’t moved my fingers far enough. If you tend to use Control-click instead, this obviously won’t be an issue. I’ve also noticed that when the laptop is resting on my lap rather than on a table, the angle of my hand sometimes produces an accidental right-click when I’m trying to click in the center – the trackpad now sits so close to the near edge of the computer that the heel of my hand easily strays onto its bottom right corner.

Miscellaneous glitches

I did have one or two glitches here and there. For the first day and a half I was using the MacBook Pro, it would lose WiFi connectivity when it went to sleep and fail to reconnect; a restart seemed to resolve this. Secondly, I left Adobe Premiere processing video and stepped away for a few minutes; the computer went to sleep, and when I woke it, the whole machine hard-crashed and restarted out of the blue. Lastly, on one occasion the computer hung so completely that I had to restart it.

I’m not used to having these issues regularly on Macs, though I’ve experienced each of them on occasion in the past. It was odd to have these happen in quick succession, and I’m not sure what to ascribe that to – Apple says it hasn’t seen these issues itself in testing. I will say that none of these issues has happened twice, but I’ll be watching for more of this stuff to see if these were just flukes.

Conclusions

This is a really solid new laptop from Apple. I wrote after the launch event that Apple now has the most logical lineup of laptops it’s had in a long time, with a clear progression in terms of power, portability, and price. Even within the new MacBook Pro range, there are size, power, and feature options. But all of these are intended to be pro computers.

That’s not to say they’re all intended to be the only computer someone who uses heavy-duty creative apps needs – the Mac Pro and iMac are there at least in part to meet those needs. But these are computers that the vast majority of people who use a Mac for work would be fine to use as their only machine – that’s certainly the case for me. This 15″ version I’ve been testing is slightly less portable than the 13″ version, but can be significantly more powerful, and could handle pretty much any video or photo editing task you’d want to throw at it. Yes, there are desktops, including Apple’s, that could perform some of those tasks more quickly, but this laptop is intended for someone who needs portability too, and that’s the point here. Every computing device involves compromises – here, portability has been prioritized over raw power, but not in a way that makes this computer unsuitable for demanding tasks.

All that would be true even if the Touch Bar didn’t exist, and yet it does. It’s a really nice addition to what’s already a great computer, and once you get some way along the learning curve it really speeds up tasks and makes life easier on your hands. As third party developers embrace it, it’ll be even more universally useful, and I wouldn’t be surprised if we see some developers using the Touch Bar in really innovative ways within their apps. Can you live without it? Absolutely – all of us have until now. But it’s a great addition if you’re in the market for a new laptop.

Facebook, Ad Load, and Revenue Growth

Facebook and ad load have been in the news a bit over the past few days, since CFO David Wehner said on Facebook’s earnings call that ad load would be a less significant driver of revenue growth going forward. I was listening to the call and watching the share price, which was resolutely flat after hours until the moment he made those remarks, and then dropped several percent. So it’s worth unpacking both the statement and the actual impact ad load has as a driver of ad revenue growth.

A changing story on ad loads

First, let’s put the comments on ad load in perspective a bit. It’s worth looking at what’s been said about ad loads on earlier earnings calls to see how those comments compare. Here’s some commentary from the Q4 2015 call:

So, ad load is definitely up significantly from where we were a couple of years ago. And as I mentioned, it’s one of the factors driving an increasing inventory. Really one thing to kind of think about here is that improving the quality and the relevance of the ads has enabled us to show more of them and without harming the experience, and our focus really remains on the experience. So, we’ll continue to monitor engagement and sentiment very carefully. I mentioned that we expect the factors that drove the performance in 2015 to continue to drive the performance in 2016. So, I think that’s the color I can give on ad loads.

Here’s commentary from a quarter later, on the Q1 2016 call:

So on ad load, it’s definitely up from where we were couple of years ago. I think it’s really worth emphasizing that what has enabled us to do that is just improving the quality and the relevance of the ads that we have, and that’s enabled us to show more of them without harming the user experience at all. So that’s been really key. Over time, we would expect that ad load growth will be a less significant factor driving overall revenue growth, but we remain confident that we’ve got opportunities to continue to grow supply through the continued growth in people and engagement on Facebook as well as on our other apps such as Instagram.

Some of that is almost a carbon copy of the Q4 commentary, but note the second half of the paragraph, where Wehner goes from saying 2016 would be like 2015 to saying that over time ad load would be a less significant driver. This is something of a turning point. Now, here’s Q2’s commentary:

Additionally, we anticipate ad load on Facebook will continue to grow modestly over the next 12 months, and then will be a less significant factor driving revenue growth after mid-2017. Since ad load has been one of the important factors in our recent strong period of revenue growth, we expect the rate at which we are able to grow revenue will be impacted accordingly

These remarks turn “over time” into the more specific “after mid-2017”. Now here’s the Q3 commentary that caused the stock drop:

I also wanted to provide some brief comments on 2017. First on revenue, as I mentioned last quarter, we continue to expect that ad load will play a less significant factor driving revenue growth after mid-2017. Over the past few years, we have averaged about 50% revenue growth in advertising. Ad load has been one of the three primary factors fueling that growth. With a much smaller contribution from this important factor going forward, we expect to see ad revenue growth rates come down meaningfully….

Again, it feels like there’s an evolution here, even though Wehner starts out by saying he’s repeating what he said last quarter. What’s different now is the replacement of “less significant factor driving revenue” with “much smaller contribution from this important factor”, and of “the rate at which we are able to grow revenue will be impacted accordingly” with “ad revenue growth rates come down meaningfully”. Those changes are both a matter of degree, and they feel intended to signal a stronger reduction in growth rates going forward.

Drivers of growth

However, as Wehner has consistently reminded analysts on earnings calls, ad load is only one of several drivers of growth for Facebook’s ad revenue. The formula for ad revenue at Facebook is essentially:

Users x time spent x ad load x price per ad

To the extent that there’s growth in any of those four components, that drives growth in ad revenue, and to the extent that there’s growth in several of them, there’s a multiplier effect for that growth. To understand the impact of slowing growth from ad load, it’s worth considering the contribution each of these elements makes to overall ad revenue growth at the moment:

  • User growth – year on year growth in MAUs has been running in the mid-teens, between 14 and 16% over the last year, while year on year growth in DAUs has been slightly higher, at around 16-17% fairly consistently
  • Time spent – Facebook doesn’t regularly disclose actual time spent, but has said recently that this metric is also up by double digits, so at least 10% year on year and perhaps more
  • Ad load – we have no metric or growth rate to look at here at all, except directionally: it rose significantly from 2013 to 2015, and continues to rise, but will largely cease to do so from mid-2017 onwards.
  • Price per ad – Facebook has regularly provided directional data on this over the last few years, but it was a highly volatile metric until recently, with growth spiking as mobile took off, and then settling into the single digits year on year in the last three quarters.

So, to summarize, using our formula above, we have 16-17% user growth, 10%+ growth in time spent, an unknown rate of growth in ad load, and 5-6% growth in price per ad – and remember that these compound multiplicatively rather than simply adding, as the sketch below shows.
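
To make that multiplier effect concrete, here’s a rough sketch of how these rates compound and what they imply about ad load’s current contribution. The specific rates are my approximations from the figures above, not numbers Facebook has reported:

```python
# Rough sketch: compound the known growth factors, then back out the
# implied contribution from ad load. All rates are approximations taken
# from the bullets above, not Facebook's reported figures.

user_growth = 0.165        # DAUs up ~16-17% year on year
time_spent_growth = 0.10   # "double digits", so at least 10%
price_growth = 0.055       # price per ad up ~5-6%

# Growth excluding ad load (the factors multiply, they don't add):
ex_ad_load = (1 + user_growth) * (1 + time_spent_growth) * (1 + price_growth)
print(f"Growth excluding ad load: ~{ex_ad_load - 1:.0%}")  # ~35%

# Implied ad load contribution, given observed ad revenue growth of 57-63%:
for observed in (0.57, 0.63):
    implied = (1 + observed) / ex_ad_load - 1
    print(f"{observed:.0%} observed growth implies ~{implied:.0%} ad load growth")
```

On those assumptions, the known factors compound to roughly 35% growth on their own, with ad load implicitly contributing the rest – numbers which line up with the ranges I discuss below.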

The ad load effect

Facebook suggests that ad load is reaching saturation point, so just how loaded is Facebook with ads today? I did a quick check of my personal Facebook account on four platforms – desktop web, iOS and Android mobile apps, and mobile web on iOS. I also checked the ad load on my Instagram account. This is what I found:

  • Desktop web: an ad roughly every 7 posts in the News Feed, plus two ads in the right side bar. The first ad was the first post on the page
  • iOS app: an ad roughly every 12 posts, with the first ad being the second post in the News Feed
  • iOS web: an ad roughly every 10 posts, with the first ad being the fourth post in the News Feed
  • Android app: an ad roughly every 10-12 posts, with the first ad being the second post in the News Feed
  • Instagram on iOS: the fourth post and roughly every 10th post after that were ads.

That’s pretty saturated. You might argue that Facebook could raise the density of ads on mobile to match desktop density (every 7 rather than every 10-12), but of course on mobile the ad takes up the full width of the screen (and often much of the height too), which means the ceiling is likely lower on mobile. I’m sure Facebook has done a lot of testing of the tipping point at which additional ads deter usage, and I would imagine we’re getting close to that point now. So this is a real issue Facebook is going to be dealing with. I did wonder to what extent this is a US issue – in other words, whether ad loads might be lower elsewhere in the world due to lower demand. But on the Q2 earnings call, Facebook said that there aren’t meaningful differences in ad load by geography, so this is essentially a global issue.
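
For what it’s worth, here’s a quick sketch of the theoretical headroom if Facebook pushed mobile up to desktop ad density, based purely on my informal counts above – though as I’ve said, the practical ceiling on mobile is probably lower than on desktop:

```python
# Quick sketch: additional mobile ad inventory if mobile matched desktop
# ad density. Based on my own informal feed counts, not any disclosed
# Facebook figures.

desktop_posts_per_ad = 7
mobile_posts_per_ad = (10, 12)   # range observed across the iOS and Android apps

for posts in mobile_posts_per_ad:
    uplift = posts / desktop_posts_per_ad - 1
    print(f"1 ad per {posts} posts -> ~{uplift:.0%} more impressions at desktop density")
```

Even in that best case, matching desktop density would be a one-off bump of roughly 40-70% in mobile impressions, after which this particular lever is gone for good.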

So, then, if this ad load issue is real, what are the implications for Facebook’s ad revenue growth? Well, Facebook’s ad revenue has grown by 57-63% year on year over the past four quarters, and increasing ad load clearly accounts for some of that, but much of it comes from the other factors in our equation. Strip the ad load effect out and growth rates could drop quite a bit, by anywhere from 10-30 percentage points, leaving Facebook with 30-50% year on year growth without a contribution from ad load. Even at the lower end of that range, that’s still great growth, while at the higher end it’s amazing growth. But either would be lower than it has been recently.

Of course, it’s also arguable that capping ad load would constrain supply of ad space, which could actually drive up prices if demand remains steady or grows (which Facebook is certainly forecasting). Facebook has dismissed suggestions in the past that it would artificially limit ad load to drive up prices, but this is a different question. Supply constraints could offset some of the slowing contribution from ad load itself, though how much is hard to say.

Ad revenue growth from outside the News Feed

Of course, Facebook isn’t limited to simply showing more ads in the Facebook News Feed. While overall impressions actually fell from Q4 2013 to Q3 2015 as usage shifted dramatically from desktop to mobile, where there are fewer ads, total ad impressions have been up by around 50% year on year in the last three quarters. Much of that growth has been driven by Instagram, which has ramped from zero to the significant ad load I just described over the course of the last three years. Multiply that by Instagram user growth (which isn’t included in Facebook’s MAU and DAU figures) and you get a significant contribution to overall ad growth too. As I understand it, the ad load comments apply to Instagram as well, but user growth there will still contribute significantly to overall ad revenue growth.

And then there are Facebook’s other properties which until today haven’t shown ads at all: Messenger and WhatsApp. As of today, Facebook Messenger is going to start showing some ads, and that will be another potential source of growth going forward. WhatsApp may well do something similar in future, too, although Zuckerberg will have to overcome Jan Koum’s well-known objections first.

Growth beyond ad revenue

And then we have growth from revenue sources other than ads. What’s been striking about Facebook over the last few years – even more than Google – is how dominated its revenues have been by advertising. The proportion has actually risen from a low of 82% of revenue in Q1 2012 all the way back up to 97.2% in Q3 2016. It turns out that the larger contribution from other sources back then was essentially down to the FarmVille era, with Zynga and other game companies generating revenues through Facebook’s game platform. What’s even more remarkable is that these payments still make up the bulk of Facebook’s “Payments and other fees” revenues today, as per the 10-Q:

…fees related to Payments are generated almost exclusively from games. Our other fees revenue, which has not been significant in recent periods, consists primarily of revenue from the delivery of virtual reality platform devices and related platform sales, and our ad serving and measurement products. 

As you can see in the second half of that paragraph, Facebook anticipates generating some revenue from Oculus sales going forward, though it hasn’t been material yet, and later in the 10-Q the company suggests this new revenue will only be enough to (maybe) offset the ongoing decline in payments revenue as usage continues to shift from desktop to mobile.

Of course, Facebook now has its Workplace product for businesses too, which doesn’t even merit a mention in this section of the SEC filing. Why not? Well, it would take 33 million active users to generate as much revenue from Workplace in a quarter as Facebook currently generates from Payments and other fees. It would take 12 million active users just to generate 1% of Facebook’s overall revenues today. And that’s because Facebook’s ad ARPU is almost $4 globally per quarter, and $15 in the US and Canada. Multiplied by 1.8 billion users, it’s easy to see why Workplace at $1-3 per month won’t make a meaningful contribution anytime soon.
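
Here’s the back-of-envelope math behind those Workplace numbers, assuming the midpoint of Workplace’s $1-3 per month pricing – the revenue figures are my approximations of Facebook’s Q3 2016 results:

```python
# Back-of-envelope check on the Workplace user counts above. Assumes the
# midpoint of Workplace's $1-3/month pricing; revenue figures are
# approximations of Facebook's Q3 2016 results.

workplace_arpu_quarter = 2 * 3               # $2/month midpoint -> $6/quarter
total_revenue = 7.0e9                        # ~$7 billion total quarterly revenue
payments_revenue = total_revenue * 0.028     # ~2.8% of revenue (ads are 97.2%)

users_to_match_payments = payments_revenue / workplace_arpu_quarter
users_for_1pct_of_revenue = (total_revenue * 0.01) / workplace_arpu_quarter

print(f"Users to match Payments revenue: ~{users_to_match_payments / 1e6:.0f}M")    # ~33M
print(f"Users to generate 1% of revenue: ~{users_for_1pct_of_revenue / 1e6:.0f}M")  # ~12M
```

Even on generous assumptions, Workplace would need tens of millions of paying users just to register against the ad business, which is why it doesn’t merit a mention in the filing.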

Conclusion: a fairly rosy future nonetheless

In short, then, Facebook is likely going to have to make do with ad revenue for the vast majority of its future growth. That’s not such a bad thing, though – as we’ve already seen, the other drivers of ad revenue growth, from user growth to price per ad to time spent, all remain significant in the core Facebook product, and new revenue opportunities across Instagram, Messenger, and possibly WhatsApp should contribute meaningfully as well. That’s not to say that growth might not be slower, and possibly quite a bit slower, than in the recent past. But at 30% plus, Facebook would still be growing faster than any other big consumer technology company.

Apple, Microsoft, and the Future of Touch

This is one of those rare weeks when two of the tech industry’s major players hold back-to-back events, in the process illustrating their different takes on an important product category – in this case, the PC. I’ve already written quite a bit about all of this over the course of the week.

Now that it’s all done, though, I wanted to pull some of these themes and threads together. I attended today’s Apple event in person and so I’ve spent time with the new MacBooks, though not with Microsoft’s new hardware or software.

Differentiation: from hardware advantages to philosophical approaches

The biggest thing to come out of this week, which I previewed in my Techpinions piece on Monday, was a shift from hardware advantages to philosophical differences as the nexus of competition between Microsoft and Apple in PCs. MacBooks once enjoyed significant hardware advantages over all competing laptops in terms of battery life, portability, and features such as trackpads, but in recent years those advantages have all but disappeared. Instead, what we’re left with is increasingly stark philosophical differences in how these companies approach the market, and this week the focus was on touch.

Microsoft’s computing devices all run some flavor of Windows 10 and feature touch. Apple, on the other hand, continues to draw a distinction between two sets of products by both operating system and interactivity. On the one hand, you have iOS devices with touch interfaces, and on the other macOS devices with more indirect forms of interactivity. Today’s event saw Apple introduce an interesting new wrinkle to touch on the MacBook with the Touch Bar, but it’s clearer than ever that Apple refuses to put touch screens on the Mac and that won’t change soon.

Microsoft’s approach makes touch available everywhere, even when in many cases it doesn’t make sense. It’s optional, though, and Microsoft has pulled back from the over-reliance on touch that characterized Windows 8. Apple, on the other hand, wants to largely preserve existing workflows based on mouse and keyboard interactivity while adding subtle new forms of interaction. It keeps all the interaction on the horizontal plane, while Microsoft has users switching back and forth between the tabletop and display planes. There isn’t necessarily a right and wrong here – both approaches are interesting and reflect each company’s different starting points and perspectives. But it’s differences like this that will characterize the next phase of competition between them.

In some ways, this new phase of competition is analogous to the competition between Apple and Google in the smartphone market. In both cases, there are now devices made by companies other than Apple which match Apple’s core hardware performance. That’s not to say that all devices now come up to Apple’s standards – it continues to compete only at the high end, while both Google and Microsoft’s ecosystems serve the full gamut of needs from cheap and cheerful to high-priced premium. But in smartphones as in PCs, the focus of competition at the high end is now moving to different approaches rather than hardware performance. It’s intriguing, then, that it’s during this era that both Google and Microsoft are finally getting serious about making their own hardware.


The Touch Bar itself is very clever. Apple made the decision to spend a lot of time in today’s event on demos, and I think that was a good use of the time (especially in an event with less ground to cover than most). The demos really showed the utility that the Touch Bar can provide in a variety of Apple and third party apps. What Apple has done here is in essence to take a slice of the screen and put it down within reach to allow you to interact with it. There will definitely be a learning curve involved here – I can see users forgetting that it’s there unless they make an effort to use it, but I can also see it prompting users to try to touch the screen (this happened to me in the demo area). “Touch here but not there” will be an interesting mental model to adapt to, but once users get the hang of it (and developers support it in their apps) I believe it will add real value.

Apple’s price coverage

Of course, MacBooks aren’t the only portable computers Apple makes, and it’s been increasingly making the case that the iPad Pro lineup should be considered computers too. These are Apple’s touch-screen computers, but in most consumers’ minds they don’t yet belong in the same category as Windows laptops. However, when you put the new MacBooks, older MacBooks, and iPad Pros together, you get an interesting picture in terms of price and performance coverage. The chart below shows base pricing for each of these products:

[Chart: base prices across Apple’s portable computer lineup, from iPad Pro to 15″ MacBook Pro]

As you can see, there’s pretty good coverage from $599 all the way through $2399 with just the base prices. If you were to add storage and spec options (and Smart Keyboards in the case of the iPad Pros), the in-between price points would be covered pretty well too. But Apple now offers a portable computer at almost any price point in this range, and that’s interesting. The newest MacBooks alone do a nice job of covering the spread from $1199 to $2399 with increasing power and capability, while the older MacBooks fill in some gaps. There’s no denying that these products are premium, but they extend down into price points that many people will be able to reach, while providing really top-notch products for those who can afford or justify them. If you focus on those newer devices, I think this is the most coherent and logical MacBook portfolio Apple has had for years.

The next big question is what happens with desktops, because those are now from one to three years old, with no sign of an update. The one that’s had the most focus from Apple in recent years is the iMac, which is both the most mass market and the flashiest – it’s the only one that is highly visible, while both the Mac Pro and Mini could feasibly sit hidden under a desk. I don’t think Apple’s going to discontinue these anytime soon, but the timing of its lack of focus on these devices is providing an interesting window for Microsoft.

A few words on creativity

I won’t repeat everything I said in my earlier stuff on Microsoft’s event here, but suffice it to say that this creativity push is certainly interesting given that timing I just mentioned. However, it’s totally overblown to be talking about Microsoft somehow stealing away Apple’s creative customer base, for several reasons:

  • First, Apple has long since expanded beyond that base, especially if you look at the full set of devices including iPhones. Apple clearly isn’t selling hundreds of millions of iPhones solely to people who use Photoshop for a living. Even if you look at Mac buyers, they’re much broader than the cliché of ad agency creatives and video editors.
  • Secondly, all Microsoft has done so far is put a stake in the ground. The Surface Studio is a beautiful device and a well thought out machine for a subset of creative professionals. But workflows don’t change overnight just because a new computer comes along, especially if there’s an existing commitment to another ecosystem. The role of this device is to signal to creatives that Microsoft is serious about serving them, which is notable in its own right, but won’t sell millions of devices by itself.
  • Thirdly, Microsoft’s bigger creativity push is around software, with 400 million-plus Windows 10 users getting a bunch of new creativity software in the Creators Update in the spring. This will be much more meaningful in spreading the creativity message far and wide than the new hardware.
  • Lastly, even with all this, Microsoft’s efforts to associate its brand with creativity and not just productivity will take years to take hold. Perceptions don’t change overnight either.

Apple’s event today was a nice reminder that it still takes these creative professionals very seriously – both the Adobe and djay Pro demos were creativity-centric, and these new machines are clearly intended for creative professionals among others (the RAID arrays would be an obvious fit for people editing high-bandwidth video, for example). Apple isn’t going to cede this ground easily, and it will be very interesting to watch how this aspect of the competition plays out over the next few years.


Twitter’s Terrible New Metric

In the shareholder letter that accompanied Twitter’s Q3 earnings today, the company said:

consider that each day there are millions of people that come to Twitter to sign up for a new account or reactivate an existing account that has not been active in the last 30 days.

That sounds great, right? Progress! And yet this very metric is the perfect illustration of why Twitter hasn’t actually been growing quickly at all. Let’s break it down:

  • Starting point: “each day there are millions of people” – so that’s at least 2 million per day every day
  • There are ~90 days in a quarter, so 2 million times 90 is 180 million, all of whom count as MAUs in the respective months when they engage in this behavior, and could be potential MAUs for the quarter if they stick around for a couple of months
  • Over the course of this past quarter, Twitter only added 4 million new MAUs
  • That implies one of two things: either 2.2% or less (4/180) of that 180 million actually stuck around long enough to be an MAU at the end of the quarter, or a very large proportion of those who had been active users at the end of last quarter left
  • In fact, it might be even worse. Based on the same 2m/day logic, 60 million-plus people become MAUs every month, meaning this behavior contributes at least 60 million of Twitter’s MAUs each quarter (quarterly MAUs are an average of the three monthly MAU figures) even if all 60 million never log in again. On a base of just over 300 million, that means around a fifth of Twitter’s MAUs each month are in this category
  • Bear in mind throughout all this that I’m taking the bare minimum meaning of “millions” here – 2 million. The real numbers could be higher. The arithmetic is sketched below.
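
Here’s that arithmetic as a short sketch – the 2 million per day is the bare-minimum reading of “millions”, and the MAU base is roughly the 317 million Twitter reported for the quarter:

```python
# Sketch of the MAU arithmetic above, using the bare-minimum reading of
# "millions of people" per day (2 million) and Twitter's reported MAU
# base of roughly 317 million.

daily_flow = 2e6
quarterly_flow = daily_flow * 90      # ~180M signups/reactivations per quarter
net_adds = 4e6                        # MAUs actually added in the quarter

print(f"Maximum retention: {net_adds / quarterly_flow:.1%}")             # ~2.2%

monthly_flow = daily_flow * 30        # ~60M per month
mau_base = 317e6
print(f"Share of MAUs from this churn: ~{monthly_flow / mau_base:.0%}")  # ~a fifth
```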

In other words, this metric – which is intended to highlight Twitter’s growth opportunity – actually highlights just how bad Twitter is at retaining users. Because Twitter doesn’t report daily active users or churn numbers, we have to engage in exercises like this to try to get a sense of what the true picture looks like. But it isn’t pretty.

Why is retention so bad? Well, Twitter talked up a new topic-based onboarding process in its shareholder letter too. In theory, this should be helping – I’ve argued that topic-based rather than account-based follows are actually the way to go. But I signed up for a new test account this morning to see what this new onboarding process looks like, and the end results weren’t good.

Here’s what the topic based onboarding process looks like:

[Screenshot: Twitter’s topic-based onboarding screen]

So far, so good – I picked a combination of things I’m really interested in and a few others just to make sure there were a decent number of topics selected. I was also asked to upload contacts from Gmail or Outlook, which I declined to do because this was just a test account. I was then presented with a set of “local” accounts (I’m currently in the Bay Area on a business trip so got offered lots of San Francisco-based accounts including the MTA, SFGate, and Karl the Fog – fair enough). I opted to follow these 21 accounts as well, and finished the signup process. Here’s what my timeline looked like when I was done:

[Screenshot: my timeline immediately after finishing signup]

It’s literally empty – there is no content there. And bizarrely, even though I opted to follow 21 local accounts, I’m only shown as following 20 here. As I’m writing now, it’s roughly an hour later and there are now 9 tweets in that timeline, three each from TechCrunch and the Chronicle, and several others. This is a terrible onboarding experience for new users – it suggests that there’s basically no content, even though I followed all the suggested accounts and picked a bunch of topics. Bear in mind that I’m an avid Twitter user and a huge fan of the service – it provides enormous value to me. But based on this experience I’d never come away with that impression. No wonder those millions of new users every day don’t stick around. Why would you?

In that screenshot above, the recommendation is to “Follow people and topics you find interesting to see their Tweets in your timeline”. But isn’t that what I just did? As a new user, how do I feel at this point? And how do I even follow additional topics from here (and when am I going to see anything relating to the topics I already said I was interested in)? Twitter is suggesting even more SF-centric accounts top right, along with Ellen, who seems to be the vanilla ice cream of Twitter, but that’s it. If I want to use Twitter to follow news rather than people I know, which is how Twitter is increasingly talking about itself, where do I go from here?

I hate beating up on the companies I follow – I generally try to be more constructive than this, because I think that’s more helpful and frankly kinder. But I and countless others have been saying for years now that Twitter is broken in fundamental ways, and there are obvious solutions for fixing it. Yet Twitter keeps going with this same old terrible brokenness for new users, despite repeated promises to fix things. This, fundamentally, is why Twitter isn’t growing as it should be, and why people are losing faith that it will ever turn things around.

AT&T Doubles Down on the Ampersand

I recently spent a couple of days with AT&T as part of an industry analyst event the company holds each year. It’s usually a good mix of presentations and more interactive sessions which generally leave me with a pretty good sense of how the company is thinking about the world. Today, I’m going to share some thoughts about where the consumer parts of AT&T sit in late 2016, but I’m going to do so with the shadow of a possible Time Warner merger looming over all of this – something I’ll address at the end. From a consumer perspective the two major themes that emerged from the event for me were:

  • AT&T now sees itself as an entertainment company
  • AT&T is doubling down on the ampersand (&).

Let me explain what I mean by both of those.

AT&T as an entertainment company

The word “entertainment” showed up all over the place at the event, and it’s fair to say it’s becoming AT&T’s new consumer identity. From a reporting perspective, the part of AT&T which serves the home is now called the Entertainment Group, for example, and CEO Randall Stephenson said that was no coincidence – it’s the core of the value proposition in the home now. But this doesn’t just apply to the home side of the business – John Stankey, who runs the Entertainment Group, said at one point that “what people do on their mobile devices will be more and more attached to the emotional dynamics of entertainment” too.

That actually jibes pretty closely with something I wrote in my first post on this blog:

There are essentially five pieces to the consumer digital lifestyle, and they’re shown in the diagram below. Two of these are paramount – communications and content. These are the two elements that create emotional experiences for consumers, and around which all their purchases in this space are driven, whether consciously or unconsciously.

What’s fascinating about AT&T and other telecoms companies is that the two things that have defined them throughout most of their histories – connectivity and communications – are taking a back seat to content. People for the most part don’t have emotional connections with their connectivity or their devices – they have them with the other people and with the content their devices and connectivity enable them to engage with. AT&T seems to be betting that being in the position of providing content will create stickier and more meaningful relationships which will be less susceptible to substitution by those offering a better deal. And of course video is at the core of that entertainment experience.

The big question here, of course, is whether this is how consumers want to buy their entertainment – from the same company that provides their connectivity. AT&T is big on the idea that people should be able to consume the content of their choice on the device of their choice wherever they choose. On the face of it, that seems to work against the idea that one company will provide much of that experience, and I honestly think this is the single biggest challenge to AT&T’s vision of the future and of itself as an entertainment company. But this is where the ampersand comes in.

Doubling down on the ampersand

One of the other consistent themes throughout the analyst event was what AT&T describes as “the power of &”. AT&T has actually been running a campaign on the business side around this theme, but it showed up on the consumer side of the house too at the event. Incidentally, I recalled that I’d seen a similar campaign from AT&T before, and eventually dug up this slide from a 2004 presentation given by an earlier incarnation of AT&T.

But even beyond this ad campaign, AT&T is talking up the value of getting this and that, and on the consumer side this has its most concrete instantiation in what AT&T has done with DirecTV since the merger. This isn’t just about traditional bundling and the discounts that come with it, but about additional benefits you get when you bundle. The two main examples are the availability of unlimited data to those who bundle AT&T and DirecTV, and the zero-rating of data for DirecTV content on AT&T wireless networks. Yes, AT&T argues, you can watch DirecTV content on any device on any network, but when you watch it on the AT&T network it’s free. The specific slogan here was “All your channels on all your devices, data free when you have AT&T”.

The other aspect here is what I call content mobility. What I mean by that is being able to consume the content you have access to anywhere you want. That’s a given at this point for things like Netflix, but still a pretty patchy situation when it comes to pay TV, where rights often vary considerably between your set top box, home viewing on other devices, and out-of-home viewing. The first attempts to solve this problem involved boxes – VCRs and then DVRs for time shifting, and then the Slingbox for place shifting. But the long term solution will be rooted in service structure and business models, not boxes. Content mobility has been a key feature of the negotiations AT&T has been undertaking, both as a result of the DirecTV merger and in preparation for its forthcoming DirecTV Now service. It still uses a box – the DirecTV DVR – as a conduit for out-of-home viewing where it lacks the rights to serve content from the cloud, but that’s likely temporary.

AT&T’s acquisition of DirecTV was an enabler of both of these things – offering zero rating as a benefit of a national wireless-TV bundle, and the negotiating leverage that comes from scale. It also, of course, gained access to significantly lower TV delivery costs relative to U-verse.

Now, the big question is whether consumers will find any of this compelling enough to make a big difference. I’m inherently skeptical of zero rating content as a differentiator for a wireless operator – even if you leave aside the net neutrality concerns some people have about it, it feels a bit thin. What actually becomes interesting, though, is how this allows DirecTV to compete against other video providers – in a scenario where every pay TV provider basically offers all the same channels, this kind of differentiation could be more meaningful on that side of the equation. If all the services offer basically the same content, but DirecTV’s service allows you to watch that content without incurring data charges on your mobile device, that could make a difference.

Context for AT&T&TW

So let’s now look at all of this as context for a possible AT&T-Time Warner merger (which as I’m finishing this on Saturday afternoon is looking like a done deal that will be announced within hours). One of the slides used at the event is illustrative here – this is AT&T’s take on industry dynamics in the TV space:

[Slide: AT&T’s view of industry dynamics in the TV space]

Now focus in on the right side of the slide, which talks about the TV value chain compressing:

[Slide detail: the TV value chain compressing]

The point of this illustration was to say that the TV value chain is compressing, with distributors and content owners each moving into each other’s territory. (Ignore the logos at the top, at least two of which seem oddly out of place). The discussion around this slide went as follows (I’m paraphrasing based on my notes):

Earlier, there were discrete players in different parts of the value chain. That game has changed dramatically now – those heavy in production are thinking about their long-term play in distribution. Those who distribute are thinking about going back up the value chain and securing ownership rights. Premium content continues to play a role in how people consume network capacity. Scale and a buying position in premium content is therefore essential.

In addition, AT&T executives at the event talked about the fact that the margins available on both the content and distribution side would begin to collapse for those only participating on one side as players increasingly play across both.

The rationale for a merger

I think a merger with Time Warner would be driven by three things:

  • A desire to avoid being squeezed in the way just described as other players increasingly try to own a position in both content ownership and distribution – in other words, be one of those players, not one of their victims
  • A furthering of the & strategy – by owning content, AT&T can offer unique access to at least some of that content through its owned channels, including DirecTV and on the AT&T networks. This is analogous to the existing DirecTV AT&T integration strategy described above
  • Negotiating leverage with other content providers and service providers.

Both the second and third of these points would also support the content mobility strategy I described earlier, providing both leverage with content owners and potentially unique rights to owned content.

How would AT&T offer unique content? I don’t think it would shut off access to competitors, but I could see several possible alternatives:

  • Preserving true content mobility for owned channels – only owned channels get all rights for viewing Time Warner content on any device anywhere. Everyone else gets secondary rights
  • Exclusive windows for content – owned channels like DirecTV and potentially AT&T wireless would get early VOD or other access to content, for example immediate VOD viewing of shows that don’t appear on other services for 24 hours, 7 days, and so on
  • Exclusive content – existing shows and TV channels wouldn’t go exclusive wholesale, but I could see exclusive clips, and potentially some new shows, going exclusively to DirecTV and AT&T.

The big downside with all this is that whatever benefits AT&T offers to its own customers, by definition it would be denying those benefits to non-customers. That might be a selling point for DirecTV and AT&T services, but wouldn’t do much for Time Warner’s content. The trends here are inevitable, with true content mobility the obvious end goal for all content services – it’s really just a matter of time. To the extent that AT&T is seen to be standing in the way of that for non-customers, that could backfire in a big way.

On balance, I’m not a fan of the deal – I’ve outlined what I see as the potential rationale here, but I think the downsides far outweigh the upsides. Not least because the flaws in Time Warner’s earlier mega-merger apply here too – since you can never own all content, just a small slice, your leverage is always limited. What people want is all the relevant content, not just what you’re incentivized to offer on special terms because of your ownership structure. I’ll wait to see how AT&T explains the deal and whether the official rationale makes any more sense, but I suspect it won’t change much.