Friday, December 19, 2014

T-Mobile US Plans LTE over Wi-Fi, Possibly in 2015

Long Term Evolution over Wi-Fi represents one more way mobile and other service providers are combining licensed and non-licensed spectrum assets to underpin their businesses.

As planned, T-Mobile US will use 5-GHz Wi-Fi spectrum to complement its primary use of licensed 4G LTE spectrum, probably mostly in high-density areas, and primarily at peak hours of mobile usage in those areas.

LTE Advanced over Wi-Fi is seen as an approach well suited to small cells that mobile operators plan to deploy in high-density areas, primarily to support higher bandwidth in the downstream.

As mobile operators have learned to rely on offloading data demand to unmanaged Wi-Fi access, they also will be able, using LTE over Wi-Fi, to augment LTE operations based on use of licensed spectrum.

Other advantages include a more seamless end user experience, as users will not have to toggle manually between the mobile data network and local Wi-Fi. User experience also should be more consistent.

T-Mobile US is expected to add LTE over Wi-Fi as soon as 2015 as a way of enhancing its LTE network bandwidth. You might call LTE Advanced over Wi-Fi an example of “Wi-Fi also”: primary reliance on the mobile network, augmented by managed Wi-Fi. Other service providers approach Wi-Fi differently.

France’s Free Mobile and the U.S. mobile providers Republic Wireless and Scratch Wireless, for example, take an approach we might call “Wi-Fi first,” preferring that users connect to Wi-Fi as a first choice and fall back to the mobile network only when Wi-Fi is not available.

Eventually, firms such as Comcast are likely to follow that same approach when launching mobile service. Google also is said to have looked at the idea.

To be sure, the question of whether public Wi-Fi can compete with mobile has been asked for a decade and a half. Until recently, the answer has been “not yet.” The question was asked of 3G networks and now is asked of 4G networks.

Arguably, two major hurdles will have to be overcome: first, ubiquity of public Wi-Fi access, and second, the business model.

For the moment, ubiquity remains the biggest challenge, as it remains difficult to ensure coverage, let alone roaming, on public Wi-Fi today, outside the fairly limited universe of cafes, malls, hotels, airports and other areas where there is high pedestrian traffic.

So far, no service providers have been brave enough to try a “Wi-Fi-only” approach for mobile phone service. Also, virtually all mobile service providers now encourage use of unmanaged Wi-Fi “sometimes” for Internet apps.

In other words, no service providers have tried to build a mobile service exclusively on untethered access (hotspot based). The “safest” model for a non-facilities-based provider is “Wi-Fi primary, mobile network secondary.”

T-Mobile US likely will be first in the U.S. market to implement an approach that might be called “mobile first, managed Wi-Fi second.”

In the future, there might be other possibilities. Content consumption already dominates real-time communications as a lead app for mobile and untethered devices. Many tablet owners find “Wi-Fi first” a quite acceptable connectivity choice.

As content consumption grows, on all devices, it is possible a new market niche could develop, even for “mobile phone” service.

One might argue such concepts have been tried before, as with the Personal Handy Phone System. There was some thinking such a service might also develop in the United States, around the time Personal Communications Service spectrum was awarded in the 1.9 GHz band.

As it turned out, PCS wound up being “cellular telephone service.” But all that was before the Internet, before broadband, before the rise of Internet-based content consumption.

It is hard to tell whether all those changes, plus the advent of smartphones and tablets, small cells and more public Wi-Fi, will finally enable a Wi-Fi-only approach to services that appeal to a large base of consumers.

Google Delays Next Google Fiber Decisions

Google is delaying any announcement of the next possible Google Fiber markets until 2015, a move that is not necessarily unusual for a firm new to the business of building expensive, labor-intensive local access networks.

The delay affects Portland, Ore., in addition to other areas including Atlanta, Charlotte, Nashville, Phoenix, Raleigh-Durham, Salt Lake City, San Antonio and San Jose.

To be sure, a number of other potential Google Fiber gigabit network deployments also seem to be on hold, at least until early 2015.

In February 2014, Google Fiber announced that it was exploring further expansion into nine metro markets, adding to operations in Kansas City; Austin, Texas; and Provo, Utah.

At least in part, Google might want to spend a bit more time figuring out how to efficiently build multiple networks at once. There is a learning curve, even for experienced suppliers of optical fiber access networks, and Google might want to ensure it has optimized its processes.

But there might also be other complications. In at least one of the potential markets, Portland, Ore., unfavorable tax laws might soon be revised.

Tax rate uncertainty or high tax rates almost always have the effect you would expect, namely a more-cautious attitude towards investment.

But scaling major construction projects efficiently, in jurisdictions with differing rules, also is an issue. Even veteran companies with a long history of local access network construction have found there is an experience curve for fiber-to-home projects.

“I joined AT&T in 2008 and I remember around 2012 looking at some charts and the cost of speed hadn’t really had a breakthrough, because 80 percent of your deployment in broadband is labor based,” said John Donovan, AT&T senior executive vice president for architecture, technology and operations.

“And then all of a sudden you have vectoring in small form factor stuff and all of a sudden a little bit of an investment by our supply chain a few standard things and we start to take a 25 meg on a copper pair and then we move it to 45 and then 75 and then 100 which is on the drawing board,” said Donovan.

The point is that the underlying technology used by cable TV operators and telcos has been continually improved, providing better performance at prices useful for commercial deployment.

Operating practices also are becoming more efficient. Google Fiber has been able to work with local governments to streamline permitting processes and other make-ready work in ways that can lower costs to activate a new Internet access network using fixed media.

Google Fiber also pioneered a new way of building networks, getting users to indicate interest before construction starts, and building neighborhood by neighborhood, instead of everywhere in a local area.

That changes gigabit network economics. As has been true for nearly a couple of decades in the U.S. market, for example, competitive suppliers have been able to “cherry pick” operations, building only enough network to reach willing customers, without the need to invest capital in networks and elements that “reach everyone.”

That makes a big difference in business models. A network upgrade that might not have made sense if applied across a whole metro network might well make sense in some parts of a city, where there is demand.

Also, every new supplier of Internet access goes through a learning curve, generally operating inefficiently at first, but improving as experience is accumulated.

“And then we are getting better at the deployment side of the business as well,” said Donovan. “So our average technicians and our best technicians are converging.”

It is possible Google simply wants to be sure it can build in multiple areas effectively and efficiently. But the prospect of a favorable change in Oregon tax laws might also be a factor.

Some might conclude that Google, perhaps under pressure to control costs and improve profit margins, might also be evaluating how much money it wants to spend on Google Fiber, as well.

IoT Market Won't Be as Big as Forecast, Near Term

Despite a recent change of terminology--“Internet of Things” now being preferred over the original term “machine to machine”--it is likely that most of the incremental application and access revenue generated in the broad IoT markets will be created by M2M applications such as sensors.

Perhaps notably, Gartner in 2014 placed “Internet of Things” at the peak of its hype cycle, after noting in 2012 and 2013 that the peak of unrealistic expectations was approaching.

What that suggests is that observers soon will become aware that progress is not as rapid as once believed, that deployments occur much more slowly than expected, and that many will even begin to doubt the size of the market opportunity.

Eventually--and that could mean a decade or more--IoT has a shot at being as transformational as many now expect. But it is almost certain that a period of disillusionment is coming: one nearly always does when new technologies appear.

Important innovations in the communications business often seem to have far less market impact than expected, early on.

Even really important and fundamental technology innovations (steam engine, electricity, automobile, personal computer, World Wide Web) can take much longer than expected to produce measurable changes.

Quite often, there is a long period of small, incremental changes, then an inflection point, after which the whole market is transformed relatively quickly.

Mobile phones and broadband are two of the best examples. Until the early 1990s, few people actually used mobile phones, as odd as that seems now.

Not until about 2006 did 10 percent of people actually use 3G. But mobiles relatively suddenly became the primary way people globally make phone calls, and arguably also have become the primary way most people use the Internet, in terms of instances of use, if not volume of use.

Prior to the mobile phone revolution, policy makers really could not figure out how to provide affordable phone service to billions of people who had “never made a phone call.”

That is no longer a serious problem, and the inflection point everywhere in the developing world seems to have happened between 2002 and 2003.

Before 2003, one could assume that most people in the developing world could not make a phone call easily.

A decade later, most people use mobile phones. That would have been impossible to envision, in advance of the reaching of the inflection point.

That likely will be the case for IoT as well.

On a global basis, manufacturers will invest $140 billion in Internet of Things solutions between 2015 and 2020, a study by Business Insider suggests.

Likewise, the Internet of Things and the technology ecosystem surrounding it are expected to be an $8.9 trillion market in 2020, according to IDC.

Those forecasts, history suggests, will prove inaccurate, in the near term.

IDC said the installed base of connected things will be 212 billion by the end of 2020, including 30.1 billion connected autonomous things (devices and sensors working independently of any human interaction).  

IDC estimates IoT spending was $4.8 trillion in 2012 and expects the market to reach $8.9 trillion in 2020, a compound annual growth rate of 7.9 percent.
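
As a quick sanity check, the implied growth rate can be recomputed from the two endpoints IDC cites; the short Python sketch below is purely illustrative.

# Illustrative check of the compound annual growth rate implied by the
# IDC figures quoted above ($4.8 trillion in 2012, $8.9 trillion in 2020).
start_value = 4.8            # trillion USD, 2012
end_value = 8.9              # trillion USD, 2020
years = 2020 - 2012
cagr = (end_value / start_value) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")   # about 8 percent, in line with IDC's stated 7.9 percent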

Manufacturers will be the earliest adopters of IoT solutions and will invest heavily in new IoT solutions for factory floors, IDC predicts.

About 17 percent of automotive companies are using IoT devices in the production of their vehicles, for example.

IoT likely will be quite significant, eventually. But near term progress is likely to disappoint.

Thursday, December 18, 2014

U.K. Mobile Investment in "Notspots" Also Not Profitable?

U.K. mobile operators have agreed to invest £5 billion to alleviate coverage "notspots" across the United Kingdom, ensuring voice and text messaging access from the biggest four mobile operators across 90 percent of the U.K. land mass by 2017.

Had the voluntary agreement not been reached, the U.K. government was prepared to force mobile operators into mandatory roaming agreements.

The new investments will extend coverage from all four mobile operators to 85 percent of geographic areas by 2017, up from 69 percent at present.

As with most universal service requirements, it is not clear the new coverage actually will have a payback. That is generally why the notspots exist in the first place. Assume a rural tower with a transmitting radius of 1.5 miles, serving roughly nine square miles (treating the coverage area as a three-mile-square grid), in an area with fewer than 20 homes per square mile.

At 10 homes per square mile, total addressable locations are 90 locations. Assume two persons per home, or 180 potential accounts.

Assume the four contestants collaborate on tower sites, with two mobile providers sharing each tower, so that two new towers are needed to serve each rural area.

Assume market share winds up roughly split on each of the two new sets of towers. Assume 100 percent take rates, implying 90 accounts per tower, and average line revenue of about GBP21 (USD33).

That suggests per-tower revenue of about $2,970 a month, a blended rate including both prepaid and postpaid accounts.

Assume monthly tower rents are $3,800 ($1,900 per carrier, two carriers per tower). You see the problem: revenues do not cover the cost of tower site leasing.

But perhaps 75 percent of U.K. mobile towers are owned, not leased. In that case the better comparison is the cost of building and operating a tower.

Assume a cost of about $150,000 to create a tower site. If that enables $35,640 in new revenue, the business model works, even at 10 percent interest rates. The problem, of course, is that the new investment will not produce that amount of incremental revenue.

Most customers already buy service, even if that service is affected by limited coverage, as the big problem is partial coverage, not complete lack of coverage. So it would not be unreasonable to assume single digit incremental customer revenues. In that case, the payback might never be obtained.
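
Put in concrete terms, the assumptions above can be run as a simple back-of-the-envelope model. The Python sketch below merely restates the illustrative figures used in this post; none of them are operator data.

# Back-of-the-envelope rural tower economics, restating the illustrative
# assumptions above; none of these figures are operator data.
homes_per_sq_mile = 10
coverage_sq_miles = 9                # roughly nine square miles per rural cell
persons_per_home = 2
towers_serving_area = 2              # two shared towers split the accounts
monthly_arpu_usd = 33                # about GBP21 per line, per month

accounts_per_tower = (homes_per_sq_mile * coverage_sq_miles
                      * persons_per_home) // towers_serving_area   # 90 accounts
monthly_revenue = accounts_per_tower * monthly_arpu_usd            # about $2,970

# Leased site: monthly rent exceeds monthly revenue.
monthly_rent = 3800                  # $1,900 per carrier, two carriers per tower
print(monthly_revenue - monthly_rent)        # roughly -$830 a month

# Owned site: compare annual revenue against the build cost instead.
build_cost = 150_000
annual_revenue = monthly_revenue * 12        # $35,640
print(annual_revenue / build_cost)           # about 0.24, workable at 10 percent interest,
                                             # but only if all of that revenue is truly incremental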

But that is true of most universal service investments. By definition, there “is no business model.” As always, service providers will rely on profits from urban cells to essentially subsidize rural cells that might actually lose money.

Wednesday, December 17, 2014

At least 59% of U.S. Households Can Buy 100 Mbps Internet Access

At least 59 percent of U.S. residents could buy Internet access service at a minimum speed of 100 Mbps at the end of 2013, according to a report by the U.S. Department of Commerce. That probably is worth keeping in mind the next time you see or hear it said that U.S. high speed access is “behind,” or “worse” than, that of some other countries.

But that statistic raises questions. Just how much bandwidth do most U.S. consumers really need? And how much bandwidth is required so that user experience is not impaired?

Without in any way suggesting there is anything wrong with a “more is better” approach, or denying that demand is growing, what is needed to support the everyday applications that users on fixed or mobile networks typically run?

Basically, peak bandwidth requirements are driven by the number of people sharing a single account, the amount of time that each user spends using the Internet, the number of simultaneous users and the types--and numbers--of apps they use when online. Application bandwidth requirements also are an issue.

The Federal Communications Commission currently suggests a single user watching entertainment video can manage without difficulty at about 0.7 Mbps for standard-definition streaming video.

A single user requires 4 Mbps for high-definition quality streaming video.

Speeds between 1 Mbps and 2 Mbps are enough to support as many as three simultaneous users interacting with email, surfing the web or watching standard-definition streaming video.

But requirements are more complex as higher-bandwidth applications and the number of users grow. When four or more people are using a shared connection, and HD streaming, video conferencing or online gaming are used by multiple users, requirements can jump to 15 Mbps fairly quickly.

Netflix, for example, recommends speeds between 0.5 and 25 Mbps, depending on image quality, with 4K video imposing the highest loads.
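
Those per-stream figures can be combined into a rough estimate of household peak demand. The Python sketch below simply sums concurrent streams; the per-stream rates are the FCC and Netflix numbers cited above, while the household mix is a hypothetical example, not survey data.

# Rough household peak-bandwidth estimate: sum the concurrent streams.
# Per-stream rates are the FCC and Netflix figures cited above; the
# household mix is hypothetical.
mbps_per_stream = {
    "sd_video": 0.7,     # FCC: standard-definition streaming
    "hd_video": 4.0,     # FCC: high-definition streaming
    "uhd_video": 25.0,   # Netflix: 4K recommendation
    "web_email": 1.0,    # light browsing and email
}

# A hypothetical four-person household at peak evening hours.
concurrent_use = ["hd_video", "hd_video", "uhd_video", "web_email"]
peak_demand_mbps = sum(mbps_per_stream[app] for app in concurrent_use)
print(peak_demand_mbps)              # 34 Mbps for this particular mix

Even that relatively heavy mix lands in the tens of megabits per second, not the hundreds.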

As a baseline, the Australian National Broadband Network’s goal is to supply all homes and businesses with downstream speeds of at least 25 megabits per second, by 2020, with a majority of premises in the fixed line footprint getting downstream speeds of at least 50 megabits per second.

But NBN also will be using satellite delivery and fixed wireless platforms that will not deliver such speeds.

So demand assumptions matter. Single-person households likely will not be an issue, but multi-person households represent more complex challenges. Australian Internet households average about 2.1 people.

A new study by Ofcom, the U.K. communications regulator, found that “access speed” matters substantially at downstream speeds of 5 Mbps and lower. In other words, “speed matters” for user experience when overall access speed is low.

For downstream speeds of 5 Mbps to 10 Mbps, the downstream speed matters somewhat.

But at 10 Mbps or faster, the actual downstream speed has negligible to no impact on end user experience.

Since the average downstream speed in the United Kingdom now is about 23 Mbps, higher speeds--whatever the perceived marketing advantages--have scant impact on end user application experience.

Some 85 percent of U.K. fixed network Internet access customers have service at 10 Mbps or faster, Ofcom says.

So, paradoxically, even as attention shifts to gigabit networks and, more commonly, services running at hundreds of megabits per second, there remains little evidence so far that such speeds actually make a difference to most end users in terms of better experience.

What faster speeds might do is turn a spotlight on all the other elements of the delivery chain that impede experience, ranging from devices and Wi-Fi to far-end servers and backbone network interconnections.

None of that is obvious to end users, and little of the reality matters for marketing platforms, which suggest “more is better.” Of course, in the end, more is better. Up to a point.

NBN Goes Multi-Platform: Fiber to Where It Can Make Money

The Australian National Broadband Network, building a wholesale-only high speed access network across Australia, believes a new “all of the above” or multi-platform approach will accelerate the construction timetable by four years.

That is a practical investment approach some service providers have for years dubbed "fiber to where you can make money." In other words, the amount of investment is directly related to expected return on investment.

NBN also revised its deal with Telstra, the former incumbent operator. The revised agreement has NBN Co. progressively taking ownership of elements of Telstra’s copper and hybrid fiber coax (HFC) networks in those parts of the country where doing so represents the fastest and most cost-effective way to deliver fast broadband to families and businesses.

The original agreements between NBN and Telstra in June 2011 gave NBN Co. access to Telstra assets such as ducts, pits and exchanges to use in the rollout of the NBN, but not access to Telstra’s copper or HFC assets.

The latest deal is part of a lengthy and contentious struggle between Telstra and the government about the NBN. Telstra was no more anxious than most tier one fixed network owners to sell its fixed network assets, and become a non-facilities-based service provider.

On the other hand, Telstra eventually agreed to sell its fixed assets, in exchange for greater freedom in mobile services, and a great deal of cash. Precisely how much Telstra will be paid for decommissioning its access network and becoming a wholesale customer of the NBN is not known. But there is speculation the amount could be as much as AU$100 billion over a period of 55 years.   

NBN’s business plan is highly dependent on revenue assumptions. Among the key elements is a doubling of average revenue per user over a decade and limited competition from mobile or untethered alternatives.

NBN is betting that wireless-only households, those not buying a fixed line service, will not exceed 13.5 percent of total high speed connections through 2041. Doubters likely will focus more on that assumption as problematic. Overly optimistic revenue assumptions have been a problem for the NBN in the past.
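
The revenue assumption is aggressive in its own right: doubling ARPU over a decade implies a sustained annual growth rate that is easy to compute.

# Annual ARPU growth implied by "doubling over a decade".
implied_annual_growth = 2 ** (1 / 10) - 1
print(f"{implied_annual_growth:.1%}")    # about 7.2 percent a year, every year, for ten years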

And competition could well emerge as a key problem.

But demand assumptions also might be an issue. Single-person households likely will not be a problem, but multi-person households could be. Australian Internet households average about 2.1 people.

But as with all statistics related to the Internet, “average” might not be so useful, representing a mix of single-person households and families or multi-person sites.  

Construction of the NBN is set to have commenced or be complete for around 3.3 million Australian homes and businesses by June 2016. More than 309,000 premises across Australia were connected to the NBN as of 2014.

It is NBN Co’s goal to make all homes and businesses serviceable by 2020 with access to download data rates of at least 25 megabits per second. The majority of premises in the fixed line footprint will have access to download data rates of at least 50 megabits per second.

Households and businesses already served by the Optus or Telstra HFC cable networks will receive fast broadband over an upgraded HFC network; they will continue to be connected using HFC.

Where the NBN fiber-to-the-premises (FTTP) network has been deployed or is in advanced stages of being built, that will be the connection approach.

Where the NBN fixed wireless or satellite networks are earmarked for deployment, that will continue to be the case.

In other cases, communities are likely to receive service using a fiber-to-the-node (FTTN) network, while multi-dwelling units such as apartment blocks will get fiber-to-the-premises.

In other words, NBN will be using a “fiber to where we can recover our investment” approach.

Tuesday, December 16, 2014

T-Mobile US "Data Stash" Allows Roll Over of Unused Data Capacity

T-Mobile US will in January 2015 start allowing its customers to roll over unused data plan capacity into the next month’s usage bucket, as part of a new program called “Data Stash.”

The plan is reminiscent of the Cingular “Rollover” feature, which allowed mobile users to roll unused voice minutes into the next month’s usage, with unused minutes expiring after a year.

Data Stash apparently will have the same “expire after 12 months” provision, and will be available to new and current T-Mobile US postpaid customers on a “Simple Choice” plan who have purchased additional 4G LTE data, 3 GB or more for smartphones and 1 GB or more for tablets. In addition, T-Mobile US announced it will start every Data Stash with 10 GB of free 4G LTE data.
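
Mechanically, a rollover scheme of this kind amounts to a set of dated buckets that expire twelve months after they are banked. The Python sketch below is a hypothetical model of that accounting, not a description of T-Mobile US billing systems; the class name, the month numbering and the oldest-first drawdown order are all assumptions.

# Hypothetical model of rollover-data accounting with a 12-month expiry.
# This illustrates the concept only; it is not T-Mobile US's billing logic.
from collections import deque

class DataStash:
    EXPIRY_MONTHS = 12

    def __init__(self):
        self.buckets = deque()    # (month_banked, gigabytes), oldest first

    def bank_unused(self, month, gigabytes):
        # Roll unused data from a billing month into the stash.
        if gigabytes > 0:
            self.buckets.append((month, gigabytes))

    def draw(self, current_month, gigabytes_needed):
        # Drop buckets banked 12 or more months ago (assumed expiry rule).
        while self.buckets and current_month - self.buckets[0][0] >= self.EXPIRY_MONTHS:
            self.buckets.popleft()
        # Consume the oldest buckets first (assumed drawdown order).
        drawn = 0.0
        while self.buckets and drawn < gigabytes_needed:
            month, gb = self.buckets.popleft()
            take = min(gb, gigabytes_needed - drawn)
            drawn += take
            if gb > take:
                self.buckets.appendleft((month, gb - take))
        return drawn

# Example: bank 2 GB of unused data in month 1, draw 1 GB in month 3.
stash = DataStash()
stash.bank_unused(month=1, gigabytes=2.0)
print(stash.draw(current_month=3, gigabytes_needed=1.0))    # 1.0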

The move adds value, but doesn't directly lower data plan prices, a consideration that might be growing in importance as the U.S. mobile price war threatens gross revenues and profit margins for all the top four U.S. mobile service providers.

In the month since mid-November 2014, Verizon, AT&T, T-Mobile US and Sprint have lost about $45 billion in equity value as a result of growing perception that the marketing war now is hitting gross revenue and profit margins.
