Critiquing the Harvard broadband study

Other voices are finding problems with the data at the core of the FCC’s broadband study, produced by Harvard’s Berkman Center for Internet & Society.

On Monday, George Ou of the Digital Society think tank systematically trashed the Harvard Berkman Study, concluding:

The underlying data cited by Berkman study is simply too flawed to be of any use. And because the study bases its conclusions on flawed data, the conclusions drawn in the Berkman broadband study are equally unreliable.

Bret Swanson (former executive editor of The Gilder Technology Report) writes:

the real purpose of the report is to make a single point: foreign “open access” broadband regulation, good; American broadband competition, bad.

The gaping, jaw-dropping irony of the report was its failure even to mention the chief outcome of America’s previous open-access regime: the telecom/tech crash of 2000-02. We tried this before. And it didn’t work!

There will be more criticism leveled at this report. Bottom line?

As we stated on page 24 of our report [pdf, 944KB], in undertaking any international comparison, one must be cautious not to fix on any one measure, regardless of whether it provides good or bad news. Much more can be learned by considering a range of indicators and, most importantly, by understanding and taking into account the underlying factors that influence the results.

Ignoring this, too many are seduced by easy headlines and fail to do their own scholarly analysis of the data.

9 thoughts on “Critiquing the Harvard broadband study”

  1. I found the Digital Society paper to be quite flawed and misleading. For example, they compare Figure 3.17, 'Average Advertised Speed', to actual download rates from Akamai. Unsurprisingly, the actual download rates are much lower, and there is a high degree of variance between the two measures.

    The Harvard study also looked at actual download speeds, using data from Speedtest.net. The critique mentions this only in passing, which misleads the casual reader, and it never compares the Speedtest.net data to the Akamai data, which would be a far more useful comparison. The Harvard study explicitly analyzes the relationship between average advertised and actual speed (Figure 3.18).

    Looking at the Speedtest.net data (Figure 3.19), we see that there is a far greater correlation between the Akamai data and the Harvard report than the authors of this critique admit.

    The Harvard study used all three of these indicators: highest speed offered by an incumbent, average speed offered, and average download speeds, among many others.

    I find it hard to believe the authors of this critique could make such a fundamental mistake in their analysis accidentally. This throws their entire analysis into extreme disrepute.

  2. Agreed.

    Also, while average speeds are very important, you can't discount advertised speeds altogether. They're indicative of different things.

    Advertised speeds, especially right now, are indicative of technological advancement. Advertised speeds of about 25-50 Mbps or more generally mean that a cableco is using DOCSIS 3.0 or that a telco has run FTTN. Average speeds usually reflect technological investment further within the network.

    So, really, looking at either to the exclusion of the other, as does the Digital Society blog post, is misleading. You have to take both into account, as did the Harvard study.

  3. Privacy – Agree (as my post indicates) that you shouldn't fix on any one indicator, but you need to get the underlying data right. Harvard used OECD advertised offer data – and that data was flawed. (e.g., why did the OECD ignore Quebec? Why does the OECD introduce bias by over-sampling some countries and some offers within countries? etc.)

    Advertised speed (like any indicator) is only indicative of technological advancement if the data is collected correctly.

  4. Good point, Mark. The data should absolutely be accurate. Oversampling is a problem – one that the Harvard/FCC study tried to correct by limiting its sample to the top 4 ISPs in any given market, which represent at least 80% of local market share in most surveyed countries (including Canada). The whole point of that limit was to avoid oversampling.

    Videotron's market share, for example, is not large enough in Canada to allow it to completely skew Canada's avg. advertised speeds.

  5. Actually, for "average advertised speed" the OECD uses a straight arithmetic average, not a weighted one (a small worked example of the difference appears at the end of the comments).

    Also, note the flawed assumption on Berkman page 113: "Firms compete in national markets." Actually, firms compete regionally – and some would say even down to the building level (such as the Shaw/Novus battle).

    Berkman also looks at the highest speed offered by an incumbent – which ignores the Canadian experience, where the highest speeds have typically been offered by non-ILECs.

  6. Yes, good points. It's true that firms compete regionally and not nationally, and that the FCC and OECD did not use weighted averages to get average advertised speeds. The FCC instead narrowed the number of ISPs included in the calculation to the top 4, which represent about 80% of the overall market.

    The Novus/Shaw battle is a good example of why this might be a good way to do things. Shaw, for example, started offering 'regional' predatory prices that were unreasonably low. These prices were only available to the very few buildings with potential Novus customers.

    When comparing national broadband capacity and price on an international scale, it would not make a lot of sense to include the capacity of a few apartment buildings in Vancouver.

    Again, among non-ILECs there may be a few, such as Novus, that offer high speeds, but given broadband market entry costs, these never reach very far into the market. We get a few buildings here and there with FTTH, while Portugal gets 1 Gbps connections all over. The incumbents in Canada dominate, and even among non-ILECs, the resellers have the larger market shares, and they're limited to ILEC capacities/tariffs.

    The bottom line is, most Canadians do not have access to the types of speeds we're talking about.

  7. You say tomatoe… 🙂

    Predatory and "unreasonably low prices" are terms that can get argued in the courts. Generally, consumers like price wars.

    To exclude these Novus/Shaw localized prices (or, say, TELUS at Waterfront in Toronto), you would also need to exclude some of the European FTTH offerings – such as Dansk Brodband, which has a tiny share.

    It is complicated!

    Btw, with 50-100 Mbps services now offered by the major cablecos in the major urban centres (plus Aliant's FTTP work in New Brunswick), I think that most of our population either has, or will soon have, access to the ultra speeds.

    Keep in mind that the current wave of studies was generally frozen at December 2008, so we needed to go back to what was available then in order to benchmark.

  8. It certainly is complicated!

    When "unreasonably low" is lower than cost, it's usually pretty close to predatory, but either way, Shaw isn't offering prices that are sustainable on any level higher than building by building.

    Ideally, a study would account, as you say, for every small ISP by market share and reach in every country in the world. This is very difficult to do. The FCC study attempted to account for this by focusing on the top 4 providers in each country (excluding Novus at <1%, Videotron at <11%, and others) to get some consistency. Dansk, which is not in the top 4, would also be excluded from their rankings, as would TELUS' Toronto services.

    Certainly, there have been changes here in Canada and in the rest of the world in the past year. Canadian ISPs have introduced new services at 50-100 Mbps. Countries such as Portugal and Slovenia have introduced 1 Gbps connections. So we would need to update surveys of advertised speeds internationally, not just in Canada, and see how we measure up.

  9. I just read through the CRTC's Communications Monitoring Report for 2009 to answer my own question above.

    In 2008, the incumbent telephone companies and cable providers in Canada had 94.5% of all residential high-speed internet subscribers. That's down a mere 1.5% in four years. The other 5.5% was distributed amongst all the 'resellers', utility telecoms, and all other providers combined.

    The evidence shows that, on the face of it, we do not have healthy competition in Canada. At best, we have regional duopolies. You can show that customers have all the choice in the world, but they still aren't adopting it.

    In a healthy competitive market, you don't see such behaviour. You can't blame the consumers for not switching, or new competitors for not winning, as market forces would allow this to happen naturally.

    This is regulatory failure, plain and simple, in my opinion. The OECD and Harvard reports may or may not have methodological failings (I think the Harvard report was pretty good, myself). I'm afraid, Mark, that your analysis simply comes to illogical and demonstrably false conclusions.
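
A brief note on the weighting point raised in comments 3 through 6: a straight arithmetic average over advertised offers treats a niche gigabit offer exactly the same as an incumbent's entry-level tier, while a subscriber-weighted average does not. Here is a minimal sketch of the difference in Python; all offer speeds and market shares below are made up for illustration and are not OECD, Berkman or CRTC figures.

# A minimal sketch of unweighted vs. subscriber-weighted average advertised
# speed for one hypothetical country. All figures are illustrative only.
offers = [
    (1000, 0.01),  # niche FTTH provider with a tiny share
    (50,   0.30),  # cable incumbent, DOCSIS 3.0 tier
    (25,   0.39),  # telco incumbent, FTTN tier
    (5,    0.30),  # entry-level DSL tier
]

# Straight arithmetic average: every advertised offer counts equally,
# so the lone 1 Gbps offer pulls the national figure up sharply.
unweighted = sum(speed for speed, _ in offers) / len(offers)

# Subscriber-weighted average: each offer counts in proportion to the
# share of subscribers actually on it (shares here sum to 1.0).
weighted = sum(speed * share for speed, share in offers)

print(f"unweighted average: {unweighted:.1f} Mbps")  # 270.0 Mbps
print(f"weighted average:   {weighted:.1f} Mbps")    # 36.2 Mbps

A top-4 cutoff of the kind described in comments 4 and 6 would simply drop the niche 1 Gbps offer from the calculation altogether; whether that, a subscriber-weighted average, or a straight arithmetic average best reflects a national market is exactly the methodological choice being debated above.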

