Wednesday, November 12, 2008

Tim Lee's Twin Fallacies

[Edit: Tim replies. I reply.]

Cato has finally gotten around to publishing Tim Lee's article, "The Durable Internet: Preserving Network Neutrality without Regulation." I first saw a draft of his paper in March, and Tim engaged in a good-spirited back-and-forth with me over email. The primary failings that I perceived then remain unaddressed in this final version. They are twofold:


1. The fallacy that any non-discrimination regulation is the same as the combined force of all misguided regulation since the advent of administrative agencies

The first problem with Lee's article is that it repeats one of the most common mistakes of certain libertarian sects: assuming that any government regulation is as bad as all government regulation. In Lee's case, the devilish regulation equated with network neutrality is the Interstate Commerce Act, the Civil Aeronautics Board, and the sum of all Federal Communications Commission regulation. This approach mirrors earlier claims by Bruce Owen, Larry Downes, and Adam Thierer, which I rebut here.

Lee begins by observing that "The language of the Interstate Commerce Act was strikingly similar to the network neutrality language being considered today." We should not be surprised that at least some of the non-discriminatory principles found in modern-day neutrality proposals resemble those in the ICA. Indeed, net neutrality is inspired in part by elements of common carriage, which cross-pollinated into communications law in the 1910 Mann-Elkins Act (see pp. 21-23 of my thesis for more on this history). The gating question is whether the elements of the Interstate Commerce Commission that produced the inefficiencies Lee describes are at all related to the non-discriminatory language that he claims connects the two. Only if the answer is "yes" would a responsible analysis go on to consider whether the markets are relatively analogous, whether the administrative agencies tend toward the same failures, and whether the costs of regulation truly outweigh the benefits. In short, it is not enough to assert that because net neutrality smells like the ICA, it is doomed to fail.

I won't discuss the relationship to the Civil Aeronautics Board because I think the analogies are tenuous at best.

Finally, we arrive at the FCC discussion, which holds the most promise for actually being relevant. Unlike Bruce Owen, who inexplicably compares neutrality proposals to the AT&T antitrust proceedings, Lee seeks to equate neutrality with FCC rate-subsidization and market entry prohibitions. He concludes that, "like the ICC and the CAB, the FCC protected a client industry from the vagaries of markets and competition." Perhaps, but why is this similar to non-discrimination regulation?

A more accurate analogy with FCC rulemaking would compare neutrality to the non-discriminatory part of common carriage, the Computer Inquiries, Carterfone, or all three. Most scholars recognize that these rules allowed the discrimination-free operation of dial-up ISPs and facilitated the explosion of the internet. The case of FCC non-discrimination mandates presents a stark counter-example to Lee's assertion of uniform regulatory failure.


2. The fallacy that there is an underlying "durability" of the technology/market structures of the internet that will successfully resist strong carrier incentives

Lee provides a somewhat novel argument when he claims that the internet has built-in safeguards against welfare-harming practices like network discrimination. He begins by praising the effects of the "end-to-end" architecture of the internet, in which carriers simply deliver data and allow the "edges" of the network to determine what is sent and how. He thinks that this characteristic does not need to be backed up by regulators because the technology and the market will preserve it.

With respect to markets, his argument is twofold. First he claims that outright "blocking" of services would cause such backlash (from end-users or from content providers) that it would be untenable. Second, he claims that attempts to simply degrade service would not be terribly destructive in the short term, and would provide ample time to craft a regulatory response if necessary.

Lee justifies his customer-backlash theory by pointing to cases such as the Verizon/NARAL dispute, in which the company initially refused to give the non-profit an SMS "short code" but relented in the face of public outcry. In reality, the outcry came from inside-the-beltway advocates who threatened regulation, but in any event we have a more relevant example in the Comcast/BitTorrent case, which he also discusses. The regulatory solution in that case is even more obvious, with the FCC ultimately issuing an order against the company (which is now on appeal). There is no evidence whatsoever that these resolutions were driven by users who have "had a taste of freedom," have "become acutely aware of any new restrictions," and "stubbornly refuse efforts to impose them" -- resisting via technical or financial means. Nor is there evidence that, left alone, the markets would have settled on a non-discriminatory solution.

Lee tries to make the case that the technical structure of the internet would have allowed BitTorrent users simply to adopt better ways of hiding their traffic, and that they would have prevailed in that cat-and-mouse game. This is of course speculation, but it's also irrelevant. Whether or not highly technically savvy users can temporarily evade discrimination has little to do with how such practices would affect the activities of the majority of the population. In fact, we have strong examples to the contrary worldwide, as various regimes develop more and more sophisticated means for filtering their citizens' speech (such as the news today from Argentina). In those situations, there are often many people who can subvert the filters, but the practice nevertheless fundamentally alters the nature of what is said and what innovations flourish (see, for example, the rollout and adoption of Google vs. Baidu in China).

Lee also lays out an argument for why the structure of the network itself makes it unlikely that last-mile carriers can successfully threaten blocking. He argues that because the core of the internet is highly interconnected, it would be practically impossible to discriminate against any particular site, and that those sites which are important enough to pay attention to could in turn threaten to stop serving customers from that carrier. In short, they need each other. In many cases this is true, although it doesn't necessarily mean that in all cases this relationship will be more attractive to the last-mile provider than various exclusive relationships (or that even if it is, the provider will behave rationally). Things get even dicier when we examine them from the perspective of second-tier sites or services, which have not yet achieved "must-have" status but nevertheless present revenue opportunities or competitive risk to the carriers.

Lee claims that even if this occurred, it would not be a real problem because it wouldn't be severe: "To be sure, such discrimination would be a headache for these firms, but a relatively small chance of being cut off from a minority of residential customers is unlikely to rank very high on an entrepreneur’s list of worries." His assumption that the chance of being cut off is "small" is belied by recent experience in the Comcast/BitTorrent case. The idea that one would be cut off only from a "minority of residential customers" is technically true because no one firm currently controls over 50% of residential connections, but there are some truly significant market shares that entrepreneurs would undoubtedly care about. Last-mile providers face at most duopoly competition for subscribers, and they hold a "terminating access" monopoly over their current subscribers.

These problems are all made much more severe in an environment in which carriers practice partial discrimination rather than outright blocking. In our email back-and-forth, I told Lee that:

The notion that "D can't degrade them all, because that would make D's Internet service completely useless" does not hold when you assume that D maintains a baseline level of connectivity (perhaps even at current levels of service) but only offers enhanced delivery to services/sites that pay up. Consumers don't see any change, but the process of network-wide innovation gives way to source/application-based tiering. Imagine this starting in the era of dialup (you'd have to imagine away the last-mile common carrier safeguards in that scenario). Today I'd only get web-based video from ABC, Disney, etc.

The last-mile carrier "D" need not block site "A" or start charging everyone extra to access it; it need only degrade (or maintain current) quality of service to nascent A (read: Skype, YouTube, BitTorrent) to the point that it is less usable. This is neither a new limitation (from the consumer's perspective) nor an explicit fee. If a user suddenly lost all access to 90% of the internet, the last-mile carrier could not keep their business (or at least their current price). But discrimination won't look like that. It will come in the form of improving video services for providers who pay. It will come in the form of slightly lower-quality Skyping which feels ever worse as compared to CarrierCrystalClearIP. It will come in the form of [Insert New Application] that I never find out about because it couldn't function on the non-toll internet and the innovators couldn't pay up or were seen as competitors. As Barbara van Schewick observes, carriers have the incentive and ability to discriminate in this fashion.
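To make that mechanism concrete, here is a minimal sketch (mine, in Python; the service names and bandwidth numbers are hypothetical) of a last-mile link that holds every flow at a fixed baseline while handing all remaining capacity to sources that pay the carrier -- no blocking, no visible price change, yet unpaid applications steadily lose ground to the carrier's preferred offerings.

    # Hypothetical sketch of source-based tiering on a last-mile link.
    # Every flow keeps a small "baseline" rate (nothing is ever blocked);
    # all leftover capacity goes only to sources that pay the carrier.

    LINK_CAPACITY_MBPS = 100
    BASELINE_MBPS = 2  # every flow still "works" at this rate

    flows = {
        "CarrierCrystalClearIP": {"paid": True},
        "Skype": {"paid": False},
        "NascentVideoStartup": {"paid": False},
    }

    def allocate(flows):
        """Give each flow the baseline, then split leftover capacity among payers."""
        alloc = {name: BASELINE_MBPS for name in flows}
        leftover = LINK_CAPACITY_MBPS - BASELINE_MBPS * len(flows)
        payers = [name for name, f in flows.items() if f["paid"]]
        for name in payers:
            alloc[name] += leftover / len(payers)  # "enhanced delivery" for payers only
        return alloc

    print(allocate(flows))
    # -> {'CarrierCrystalClearIP': 96.0, 'Skype': 2, 'NascentVideoStartup': 2}

Nothing in this toy allocation ever cuts off Skype or the startup; they simply never get better, while the paid service does -- which is exactly the kind of discrimination that consumer backlash is least likely to detect.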

Finally, Lee makes the argument that the current norm of "settlement-free" peering in the backbone of the internet will restrict last-mile providers' ability to discriminate and to create a two-tiered internet, because they will be bound by the equal-treatment terms of those agreements. This is not supported by practical evidence, given that none of the pushback against existing discriminatory practices has come from network peers. Nor is it supported by sound economic reasoning. It is certainly not in backbone provider E's business interest to raise prices for all of its customers (an inevitable result of paying last-mile provider D for preferred treatment). But assuming E does negotiate for equal terms, the best-case scenario is that E becomes a more expensive "premium" backbone provider by paying monopoly rents to last-mile provider D, while F becomes a "budget" backbone provider by opting out (and hence attracts the "budget" customers).

We are already seeing cracks in the dam of settlement-free peering. The Cogent/L3 meltdown happened between two backbone-only providers, and in the context of a volume-based disagreement. Two weeks ago, Sprint disconnected from Cogent because of a dispute over the terms of their peering arrangement. When you add the only recently emerging pressures of last-mile leveraging and discrimination-based disagreements, these dynamics are troubling. Lee is making the case that history is on his side, but he doesn't have much supporting history to draw from: common carriage prevented last-mile discrimination until 2005. Kevin Werbach, on the other hand, sees major risks from emerging market power, specialized peering, and what he calls possible "Tier 0" arrangements between vertically integrated providers. The Verizon/MCI/UUNET network was only recently unified, creating something close to this type of arrangement.


Conclusion

Tim Lee's article repeats, but then goes beyond, the standard refrain of no-government-regulation libertarianism. However, his novel arguments for why the internet will take care of itself are not persuasive. Ultimately, we are left with his well-put argument for the benefits of network neutrality, but without any assurance that those benefits will be preserved. Into this vacuum might flow a reasonable discussion of how targeted government regulation might be the only means of achieving the ends we both seek.
