Friday, December 19, 2008
Talk at Princeton CITP, Feb 5th
Title: Selling the Law: The Business of Public Access to Court Records
When: Thursday, February 5, 2009 - 4:30 PM
Where: Sherrerd Hall, Room 101
As government documents are increasingly digitized and put online, two divergent approaches to distributing them have developed. Under one, the documents are made easily and freely accessible. Under the other, the government retains or introduces barriers to access inspired by traditional physical access. When these barriers are fee-based, the government can inadvertently create downstream monopolies or architectures of control over public information. This problem is especially severe in the case of federal district court documents, which are available only via an outdated, fee-based, court-run system or from expensive aggregators like Lexis or Westlaw. Indeed, evidence indicates that the courts are using public access fees to subsidize other activities. If we are to be a nation of laws, citizens must have access to the law. The upfront cost of making court documents freely available is far outweighed by the long-term benefits to society. Widespread digitization combined with Internet connectivity has placed these benefits within reach. The courts must now address the task of revamping outmoded policies and funding structures in order to align their practice with this reality.
Tuesday, December 16, 2008
Radio Berkman: A (Porn) Free Nationwide Internet?
A scheduled FCC vote on a free nationwide wireless internet was derailed this week after outcry from the Bush administration, the ACLU, Congressional Democrats, and the digerati alike. What was it about the FCC's proposal that raised the eyebrows of such a diverse group of opponents? David Weinberger interviews Stephen Schultze of the Berkman Center to find out more.
Listen here.
Monday, December 15, 2008
WSJ on Google and Net Neutrality - DEVELOPING
I have done the only sensible thing and put up a Drudge siren. It's appropriate given the level of research and care that went into today's Wall Street Journal article claiming, "Google Inc. has approached major cable and phone companies that carry Internet traffic with a proposal to create a fast lane for its own content, according to documents reviewed by The Wall Street Journal."
Suffice it to say, the authors got it fundamentally wrong. They failed to understand basic networking concepts like colocation versus discrimination. Richard Whitt (full disclosure: my old boss and co-author of a forthcoming paper) wrote a charitable but biting reply. The best summary I've seen so far is actually this compendium of quotes: OMG! WSJ net-neutrality own-goal....
I'll keep updating this post as the brawl unfolds. If you're looking for evidence of the mainstream press under-performing compared to the blog-o-sphere-o-pedia-space... look no further. WSJ has become Drudge, and the blogs are actually getting the story right.
And with that, all I can say is DEVELOPING...
- At 12:49pm, the WSJ posts "Discussing Net Neutrality" which notes, "Today’s Journal story on Google's plans to develop a fast track for its own content has certainly gotten a rise out of the blogosphere." Commenters, including Dan Gillmor, ask them why they aren't retracting or correcting the story.
- At 4:23pm, another WSJ post, "What's Edge Caching?," pulls quotes from blogs describing edge caching, generally making the case that although it is a common and well-known practice, this case is different.
Cheeto-stained keyboards all over the country were burning up this morning after The Wall Street Journal reported that President-elect Obama was flip-flopping on his pro-net-neutrality position and Google was in secret talks to buy preferential treatment for their content from service providers. But as it turns out, WSJ were just ObamaOpposesNetNeutralityRolling us.
And the surge of criticism:
- Lessig Blog: The Made Up Dramas of the Wall Street Journal
- Tim Wu Blog: Google Wall Street Journal - They haven’t got the goods
- TPM: Obama Spokesperson: His Commitment To Net Neutrality Hasn't Wavered One Bit
- Ars: Google backing off net neutrality with ISP deal? Not really
- ZDNet: Media used by cable to create Google scandal
- Wired Blog: WSJ WTF?
- Broadband Reports: The Wall Street Journal's Google Hatchet Job - Opinion: paper helps cable, telcos smear their biggest enemy...
- Scott Bradner: Google as evil, now from The Wall Street Journal: WSJ ends year showing a misunderstanding of technology
- Timothy B. Lee: The Journal Misunderstands Content-Delivery Networks
- Scott Rosenberg: Journal steps in Net neutrality hornet’s nest
- Conde Nast: Google Slams 'Confused' WSJ Story on Network Neutrality
- Wired: Google Blasts WSJ, Says it's Still 'Committed to Network Neutrality'
- PC World: WSJ Accuses Google of Abandoning Net Neutrality: Reality Check
- Reuters: Google says plan would not threaten net neutrality
- Harold Feld: The Google Non-Story On Network Neutrality
- Huffington Post, Tim Karr (of Free Press): WSJ Gets It Wrong. Net Neutrality Still in the Front Seat.
- ZDNet: Google turns on net neutrality (not!)
- IDG: Google, Microsoft Say They Still Support Net Neutrality
- MediaPost: Net Neutrality Advocates Rally To Google's Defense
- Siva Vaidhyanathan: Is Google giving up on Net Neutrality? Hardly.
- Center for Democracy and Technology: Neutrality and Caching
- Public Knowledge: Comment on Wall Street Journal ‘Net Neutrality’ Story
Sunday, December 14, 2008
AWS-3 Vote Postponed Indefinitely
Martin has come under increasing pressure from all sides. The ACLU criticized the "family friendly" aspects of the plan, in chorus with comments from public interest groups. Then, the Bush Administration sent a letter to the FCC last Wednesday, stating that "the draft AWS-3 order would constrain a provider's usage of this spectrum, favoring a particular business model and potentially precluding the spectrum from allocation to the most valuable use" (coverage here). Nevertheless, Martin appeared determined to see the plan through, and issued the formal agenda the next day.
But on Friday, Congressmen Rockefeller and Waxman weighed in with a letter. These are the two guys who will head up the committees that oversee the FCC, in the Senate and House respectively. Apparently this pushed Martin over the edge, and he canceled the meeting altogether. FCC Spokesman Robert Kenny said:
"We received the letter from Senator Rockefeller and Congressman Waxman today and spoke with other offices. In light of the letter, it does not appear that there is consensus to move forward and the agenda meeting has been canceled."
Wow. This means that the question of what to do with the AWS-3 spectrum will almost certainly fall to the next FCC. They could start the process over from scratch, with new proposals for what to do with the spectrum and another series of notice-and-comment periods. Hopefully that Commission will take an approach that does not present such significant First Amendment problems. The failure of this ill-designed proposal is a bittersweet victory -- at least we didn't get bad rules out of the process. However, we have also potentially delayed the point at which this spectrum can be used to overcome our national broadband woes.
Friday, December 12, 2008
E-Government Reauthorization Might Not Pass
OMB officials and Senate Homeland Security and Governmental Affairs Chairman Joseph Lieberman and ranking member Susan Collins have battled behind the scenes in recent months to reauthorize the E-Government Act of 2002 before President Bush leaves office, but a standoff in the Judiciary Committee has probably killed the bill, sources said Wednesday.
I agree with the Sunlight blog post by Wonderlich. This is a real bummer. Although the E-Government Act doesn't solve some of the fundamental problems of public access to government information (which I discussed in my recent Berkman lecture), it would do a few things to improve the situation.
Wednesday, December 3, 2008
The AWS-3 Plot Thickens
Martin is trying to sweeten the deal for his AWS-3 spectrum auction proposal by adding a "use it or lose it" provision. If the winner of the auction does not build out its no-fee wireless internet network to all areas within 5 years, it will lose its license in the non-covered areas. Those areas will then apparently revert to an unlicensed regime. The WSJ and Reuters articles don't give much detail, but it's clear that the Chairman is doing some strategic leaking.
WASHINGTON -- Federal Communications Commission Chairman Kevin Martin is proposing giving innovators free unlicensed access to valuable airwaves if the company that buys a license to the channels doesn't meet tough requirements to build a nationwide Internet network.
The proposal has been added to a pending auction of the airwaves. The FCC is scheduled to vote on rules for the sale on Dec. 18. Mr. Martin wants the company that buys the airwaves to devote at least 25% of the spectrum to free Internet access for 95% of the country. The no-cost Internet service also would be smut-free for users under 18. Adult users could opt out of the filter blocking pornographic content.
Mr. Martin said Wednesday that he has circulated two versions of the auction item -- one with the unlicensed provision and one without -- for the other commissioners on the five-member body to review before the meeting. The FCC will vote on only one version, depending on which version the other commissioners prefer, Mr. Martin said.
Mr. Martin wants to sell a nationwide license to the airwaves rather than give the channels to entrepreneurs because he wants to promote free Internet access. By adding a clause that would give away airwaves where there isn't an Internet network after five years, Mr. Martin hopes that the owner of the channels would have an added incentive to build a network.
Mr. Martin said Wednesday that both versions of the auction item include a "use it or lose it" provision in which the owner of the channels would lose spectrum where there is no Internet access. The owner of the channels would "continue to serve whatever area they've built out," he said.
Martin also recently leaked the fact that he is proposing that adults be able to verify their identity to avoid the porn filter initially mandated for all users of the no-fee service. I helped author some comments to the FCC explaining why this filter was a bad idea, so an opt-out mechanism could theoretically be a good development... if age verification were viable, and if you thought that adults were eager to identify themselves as possible porn-lovers, and if we assumed that all adults had credit cards. In short, filtering is not a great option even with those caveats.
It all gets decided on the 18th. You can read the latest comments.
Wednesday, November 19, 2008
FCC Releases White Spaces Order
Friday, November 14, 2008
Congrats Susan and Kevin
Susan was a member of my thesis committee, and Kevin has been tremendously influential in my thinking. I've never told Kevin, but beyond informing my writing as recently as this week, he is the person whose Bare Bones Guide to HTML taught me how to make web sites in 1995.
The Commission could not be under better transitional guidance.
Tim Lee's Reasonable Retorts
1. Net Neutrality proponents don't clearly state what they are seeking to prevent, and thus evade any attempt to disprove the harms they allege.
I’ve found that any time I take one of these ISP strategies seriously and put forth an argument about why it’s unlikely to be feasible or profitable, the response from supporters of regulation is often to concede that the particular scenario I’ve chosen is not realistic...
Lee goes on to list several scenarios, all of which are possible to varying degrees. However, they all fit the simple rubric of network discrimination, and they all are harmful. In general, subtle discrimination is more likely than outright blocking. This is something that has been clearly articulated by the mainstream of neutrality proponents for some time. That is why I included a reference to Barbara van Schewick's paper. If Tim were choosing to "take one of these ISP strategies seriously," he would have done well to focus on the one that most people are talking about.
2. This type of discrimination is unlikely, isn't that bad, and we can always fix it after the fact.
First, notice that the kind of discrimination he’s describing here is much more modest than the scenarios commonly described by network neutrality activists. Under the scenario he’s describing, all current Internet applications will continue to work for the foreseeable future, and any new Internet applications that can work with current levels of bandwidth will work just fine. If this is how things are going to play out, we’ll have plenty of time to debate what to do about it after the fact.
I am describing a mainstream version of discrimination, which can happen either right now or going forward as operators upgrade their networks but keep non-payers in the slow lane. We have ample evidence of the former in Comcast/BitTorrent. The latter is simply a less visible version of the former -- an even further degree removed from Lee's scenario in which consumers have "a taste of freedom," "become acutely aware of any new restrictions," and "stubbornly refuse efforts to impose them." The fact that carriers are building out faster networks doesn't tell us whether or not this is likely. Carriers will of course build out faster networks, because they typically profit more from them (whether they impose discrimination or not). Given the current uncertain regulatory climate, it is no surprise that they have refrained from additional large-scale discrimination. This climate, however, is temporary. The relevant question is whether those network upgrades provide an additional shield from the customer backlash that Lee posits. It is clear that they do.
How bad you think this discrimination is depends on how seriously you take arguments about platform economies, dynamic innovation, network effects, and freedom of speech. It also depends on whether or not you think that degrading service achieves most of the ends of outright blocking. I argue that it does. Google's obsession with page load times is not simply because they are hyper-focused engineers. Skype's need for equal network treatment is not just because they want calls to sound nice. The BitTorrent protocol's expectation that connections are not randomly reset is not a matter of convenience.
Lee would have us believe that we will always have the space to regulate these issues, if needed, after the fact. The Comcast order might give us some hope in this regard, except for the tremendous murkiness that surrounds the decision, its implications, and its legal durability. FCC regulation falls roughly into two categories: rulemaking and adjudication. Rulemaking explicitly sets out detailed requirements, whereas adjudication defines basic guidelines and then builds policy through case-by-case enforcement. Lee clearly opposes rulemaking on its face. We are left with adjudication, but in this case he opposes further definition of enforceable principles. This is not ex post regulation; it is no regulation at all.
3. The risks are overblown, and disproved by history.
It’s worth remembering that alarmism about the future of the Web is almost as old as the Web itself.
Lee is not a fan of Lessig's "apocalyptic" predictions in 1999. While Lessig's forecasts undoubtedly have not fully come true ("yet" -- as he notes in the preface to the new edition), we have unquestionably seen some of those trends play out. Increasing control by intermediaries, domestically and abroad, threatens speech and innovation. The "open access" battle that was heating up at that time was not lost until 2005, and since then we have case studies for how the stopgap quasi-neutrality principles are strained. But I'm not here to defend Lessig (I certainly disagree strongly with him at times).
Rather than debating generally whether past predictions of others have come true, it is more productive to examine the specific issues at hand with the most relevant data points from history and the present. We know that historically corporations tended toward building closed systems like AOL and CompuServe. We know that well-crafted regulatory interventions like common carrier non-discrimination, Computer II, and Carterfone unleashed waves of innovation. We know that carriers today have pursued discriminatory practices and been partially disciplined by somewhat ambiguous regulation. We know that abroad, discriminatory practices have flourished in environments in which intermediaries exercise the most control. We know that domestically, in the parallel (and increasingly overlapping) wireless market, market actors impose restrictions that radically limit innovation.
This is not a strong historical or factual case against the need for, or success of, non-discrimination regulation.
4. Steve misunderstands settlement-free peering.
“Settlement-free” means that no money exchanges hands. If D and E are peers [this example assumes that D is a "last mile" backbone provider like Verizon and E and F are competitive "tier 1" providers such as Level 3 or Global Crossing], that by definition means that E pays D nothing to carry its traffic, and vice versa.
This technical/wonky definition is at the heart of what I consider Lee's most original, but nevertheless misguided, argument. The basic idea he posits is that because a certain set of backbone providers traditionally negotiate no-fee interconnection agreements, there is no ability for last-mile providers to leverage their power in the consumer market into the backbone market.
Let's go back and define a couple of key terms. First, "settlement-free peering" means, as Lee accurately describes, an arrangement between two providers in which they do not exchange money but simply agree to carry each other's traffic. They do so under detailed and confidential interconnection agreements that spell out terms like jitter, latency, and throughput. These agreements often require equal treatment by both parties (although they may not speak to those providers' relationships with other providers). Let's assume for the sake of argument that they always require equal treatment between the two. The types of companies that have these agreements are "Tier 1" backbone providers at the core of the internet -- Level 3, Sprint, AT&T, etc.
Second, "transit" agreements are contractual relationships between unequals. In this case, one party typically pays the other for carrying its traffic under various terms. This is the type of relationship that Comcast has with the Tier 1 providers. For example, here is an excerpt of the traceroute from my Comcast cable modem to google.com:
7 pos-0-3-0-0-cr01.chicago.il.ibone.comcast.net (68.86.90.57)
8 xe-10-1-0.edge1.newyork2.level3.net (4.78.169.45)
9 ae-2-79.edge1.newyork1.level3.net (4.68.16.78)
10 google-inc.edge1.newyork1.level3.net (4.71.172.86)
See that? My packets go from Comcast -> L3 -> Google. Comcast pays Level 3 to transmit its packets, according to confidential terms that it agrees to. Comcast has a rather large national network (although it is not a "Tier 1" provider) and thus can route its packets to exchange points where it has the best bargaining power with the other party (in this case, it sent my packets from Boston to Chicago before handing them to L3). Lee's theory is that the settlement-free peering agreements probably don't allow discrimination based on content or source, and he seems to assume that downstream transit agreements inherit this obligation because at some point they must interconnect with those backbone providers. Furthermore, he claims that both parties need each other enough that nobody would ever violate these principles.
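For the curious, here is a minimal sketch of how one might reproduce this observation at home, assuming a Unix-like machine with traceroute installed. The domain-to-network labels are my own shorthand, not anything drawn from the confidential agreements:

import subprocess

# My own shorthand labels for reverse-DNS suffixes seen along the path.
NETWORKS = {
    "comcast.net": "Comcast (last mile)",
    "level3.net": "Level 3 (Tier 1 backbone)",
}

def label_hops(host="google.com"):
    """Run traceroute and tag each hop with the network its hostname suggests."""
    out = subprocess.run(["traceroute", host], capture_output=True, text=True)
    for line in out.stdout.splitlines():
        for suffix, network in NETWORKS.items():
            if suffix in line:
                print(f"{line.strip()}  ->  {network}")
                break

label_hops()

Run against my connection, this prints the same Comcast-to-Level-3 handoff shown above.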
In my initial critique, I gave several reasons to doubt this claim. First, there is no practical evidence that Tier 1 providers have pressured their downstream transit customers to remain non-discriminatory. This has not been a factor in the discrimination disputes that we have seen to date, like Comcast/BitTorrent or Madison River (instead, regulatory threats have brought players in line). Second, there is ample reason to believe that Tier 1 providers would indeed be willing to de-peer, despite Lee's assertion that they simply need each other too much (thus I cite the Cogent/L3 dispute as well as the Cogent/Sprint de-peering from a couple of weeks ago). Third, the universe of settlement-free peering is increasingly giving way to varieties of transit agreements in which concessions are made in exchange for payment. Fourth, there are now emerging unified backbone/last-mile networks for which much of the traffic need not pass through a Tier 1 exchange point at all (e.g., Verizon/MCI/UUNET). Settlement-free peering has been a powerful norm in keeping content- or source-based discrimination out of the core of the network, but even there its strength is waning.
Wednesday, November 12, 2008
Tim Lee's Twin Fallacies
Cato has finally gotten around to publishing Tim Lee's article, "The Durable Internet: Preserving Network Neutrality without Regulation." I first saw a draft of his paper in March, and Tim engaged in a good-spirited back-and-forth with me over email. The primary failings that I perceived then remain unaddressed in this final version. They are twofold:
1. The fallacy that any non-discrimination regulation is the same as the combined force of all misguided regulation since the advent of administrative agencies
The first problem with Lee's article is that it repeats one of the most common mistakes of certain libertarian sects: assuming that any government regulation is as bad as all government regulation. In Lee's case, the devilish regulation equated with network neutrality is the Interstate Commerce Act, the Civil Aeronautics Board, and the sum of all Federal Communications Commission regulation. This approach mirrors earlier claims by Bruce Owen, Larry Downes, and Adam Thierer, which I rebut here.
Lee begins by observing that "The language of the Interstate Commerce Act was strikingly similar to the network neutrality language being considered today." We should not be surprised that at least some of the non-discriminatory principles found in modern day neutrality proposals resemble those in the ICA. Indeed, net neutrality is inspired in part by elements of common carriage, which cross-pollinated into communications law in the 1910 Mann-Elkins Act (see pp. 21-23 of my thesis for more on this history). The gating question is whether the elements of the Interstate Commerce Commission that produced the inefficiencies Lee describes are at all related to the non-discriminatory language that he claims connects the two. If and only if the answer is "yes," then a responsible analysis would consider whether or not the markets are relatively analogous, whether or not the administrative agencies tend toward the same failures, and whether the costs of regulation truly outweigh the benefits. In short, it is not enough to simply assert that net neutrality smells like the ICA and is therefore doomed to fail.
I won't discuss the relationship to the Civil Aeronautics Board because I think the analogies are tenuous at best.
Finally, we arrive at the FCC discussion, which holds the most promise for actually being relevant. Unlike Bruce Owen, who inexplicably compares neutrality proposals to the AT&T antitrust proceedings, Lee seeks to equate neutrality with FCC rate-subsidization and market entry prohibitions. He concludes that, "like the ICC and the CAB, the FCC protected a client industry from the vagaries of markets and competition." Perhaps, but why is this similar to non-discrimination regulation?
A more accurate analogy with FCC rulemaking would be to compare neutrality to the non-discriminatory part of common carriage, the Computer Inquiries, Carterfone, or all three. Most scholars recognize that these rules allowed the discrimination-free operation of dial-up ISPs, and facilitated the explosion of the internet. The case of FCC non-discrimination mandates presents a stark counter-example to Lee's assertion of uniform regulatory failure.
2. The fallacy that there is an underlying "durability" of the technology/market structures of the internet that will successfully resist strong carrier incentives
Lee provides a somewhat novel argument when he claims that the internet has built-in safeguards against welfare-harming practices like network discrimination. He begins by praising the effects of the "end-to-end" architecture of the internet, in which carriers simply deliver data and allow the "edges" of the network to determine what is sent and how. He thinks that this characteristic does not need to be backed up by regulators because the technology and the market will preserve it.
With respect to markets, his argument is twofold. First he claims that outright "blocking" of services would cause such backlash (from end-users or from content providers) that it would be untenable. Second, he claims that attempts to simply degrade service would not be terribly destructive in the short term, and would provide ample time to craft a regulatory response if necessary.
Lee justifies his customer backlash theory by pointing to cases such as the Verizon/NARAL dispute, in which the company initially refused to give the non-profit an SMS "short code" but relented in the face of public outcry. In reality, the outcry came from inside-the-beltway advocates who threatened regulation, but in any event we have a more relevant example in the case of Comcast/BitTorrent, which he also discusses. The regulatory solution in this case is even more obvious, with the FCC ultimately issuing an order against the company (which is now on appeal). There is no evidence whatsoever that these resolutions were driven by users who have "had a taste of freedom," "become acutely aware of any new restrictions," and "stubbornly refuse efforts to impose them" -- resisting via technical or financial means. Nor is there evidence that, left alone, the markets would have settled on a non-discriminatory solution.
Lee tries to make the case that the technical structure of the internet would have allowed BitTorrent users to simply adopt better ways of hiding their traffic, and that they would have prevailed in that cat-and-mouse game. This is of course speculation, but it's also irrelevant. Whether or not highly technically savvy users can temporarily evade discrimination has little to do with how such practices would affect the activities of the majority of the population. In fact, we have strong examples to the contrary worldwide, as various regimes develop more and more sophisticated means for filtering their citizens' speech (such as the news today from Argentina). In those situations, there are often many people who can subvert the filters, but the practice nevertheless fundamentally alters the nature of what is said and what innovations flourish (see, for example, the rollout and adoption of Google vs. Baidu in China).
Lee also lays out an argument for why the structure of the network itself makes it unlikely that last-mile carriers can successfully threaten blocking. He argues that because the core of the internet is highly interconnected, it would be practically impossible to discriminate against any particular site, and that those sites which are important enough to pay attention to could in turn threaten to stop serving customers from that carrier. In short, they need each other. In many cases this is true, although it doesn't necessarily mean that in all cases this relationship will be more attractive to the last-mile provider when compared to various exclusive relationships (or that even if it is, the provider will behave rationally). Things get even more dicey when we examine them from the perspective of second-tier sites or services, which have not yet achieved the "must have" status but nevertheless present revenue opportunities or competitive risk to the carriers.
Lee claims that even if this occurred, it would not be a real problem because it wouldn't be severe. "To be sure, such discrimination would be a headache for these firms, but a relatively small chance of being cut off from a minority of residential customers is unlikely to rank very high on an entrepreneur’s list of worries." His assumption that the chance of being cut off is "small" is belied by recent experience in the Comcast/BitTorrent case. The idea that one would be cut off only from a "minority of residential customers" is technically true, because no one firm currently controls over 50% of residential connections, but there are some truly significant market shares that entrepreneurs would undoubtedly care about. Last-mile providers operate in a duopoly, and each holds a "terminating access" monopoly over its current subscribers.
These problems are all made much more severe in an environment in which carriers practice partial discrimination rather than outright blocking. In our email back-and-forth, I told Lee that:
The notion that "D can't degrade them all, because that would make D's Internet service completely useless" does not hold when you assume that D maintains a baseline level of connectivity (perhaps even at current levels of service) but only offers enhanced delivery to services/sites that pay up. Consumers don't see any change, but the process of network-wide innovation gives way to source/application-based tiering. Imagine this starting in the era of dialup (you'd have to imagine away the last-mile common carrier safeguards in that scenario). Today I'd only get web-based video from ABC, Disney, etc.
The last-mile carrier "D" need not block site "A" or start charging everyone extra to access it; it need only degrade (or maintain current) quality of service to nascent A (read: Skype, YouTube, BitTorrent) to the point that it is less usable. This is neither a new limitation (from the consumer's perspective) nor an explicit fee. If a user suddenly lost all access to 90% of the internet, the last-mile carrier could not keep their business (or at least their current prices). But discrimination won't look like that. It will come in the form of improving video services for providers who pay. It will come in the form of slightly lower quality Skyping which feels ever worse as compared to CarrierCrystalClearIP. It will come in the form of [Insert New Application] that I never find out about because it couldn't function on the non-toll internet and the innovators couldn't pay up or were seen as competitors. As Barbara van Schewick observes, carriers have the incentive and ability to discriminate in this fashion.
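To make the mechanics of that scenario concrete, here is a toy bandwidth allocator, entirely my own illustration (the tier speeds and the paying-source names are invented). Nothing is blocked and no consumer-facing fee appears, yet only sources that pay escape the baseline tier:

BASELINE_KBPS = 768      # everyone keeps roughly yesterday's service level
FAST_LANE_KBPS = 15000   # enhanced delivery, sold source-by-source

# Hypothetical per-source deals; the names are invented for illustration.
PAYING_SOURCES = {"carrier-video.example", "partner-cdn.example"}

def allocate(sources):
    """Toy allocator: no source is blocked, but only payers get the fast lane."""
    return {
        src: FAST_LANE_KBPS if src in PAYING_SOURCES else BASELINE_KBPS
        for src in sources
    }

print(allocate(["carrier-video.example", "startup-voip.example"]))
# {'carrier-video.example': 15000, 'startup-voip.example': 768}

The startup VoIP service still "works" -- it is simply stuck at speeds that make it feel worse and worse next to the carrier's own offering.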
Finally, Lee makes the argument that the current norm of "settlement-free" peering in the backbone of the internet will restrict last-mile providers' ability to discriminate and to create a two-tiered internet because they will be bound by the equal treatment terms of the agreements. This is not supported by practical evidence, given the fact that none of the push-back against existing discriminatory practices has come from network peers. It is also not supported by sound economic reasoning. It is certainly not in backbone-provider E's business interest to raise prices for all of its customers (an inevitable result). But, assuming E does negotiate for equal terms, the best-case scenario is that E becomes a more expensive "premium" backbone provider by paying monopoly rents to last-mile provider D, while F becomes a "budget" backbone provider by opting out (and hence attracts the "budget" customers).
We are already seeing cracks in the dam of settlement-free peering. The Cogent/L3 meltdown happened between two backbone-only providers in the context of volume-based disagreements. Two weeks ago, Sprint disconnected from Cogent because of a dispute over the terms of their peering arrangement. When you add the more recent pressures of last-mile leveraging and discrimination-based disagreements, these dynamics are troubling. Lee is making the case that history is on his side, but he doesn't have much supporting history to draw from. Common carriage prevented last-mile discrimination until 2005. Kevin Werbach, on the other hand, sees major risks from emerging market power, specialized peering, and what he calls possible "Tier 0" arrangements between vertically integrated providers. The Verizon/MCI/UUNET network was only recently unified, creating something close to this type of an arrangement.
Conclusion
Tim Lee's article repeats but then goes beyond the standard refrain of no-government-regulation libertarianism. However, his novel arguments for why the internet will take care of itself are not persuasive. Ultimately, we are left with his well-put argument for the benefits of network neutrality, but without any assurances that it will be preserved. Into this vacuum might flow reasonable discussion of how targeted government regulation might be the only means of achieving the ends we both seek.
Wednesday, November 5, 2008
A Good Day for Openness
First, we elected a president dedicated to government transparency and accessibility. I hope that Obama's "Google for Government" bill is a harbinger of things to come in his administration. Making more information freely available and searchable will help our government function better.
Second, a slightly more wonky development. The FCC approved unlicensed use of the "white spaces." This is the culmination of a 4+ year-long process, with heavy lobbying in the past year or so. It opens up huge swaths of spectrum, which any citizen or innovator can put to use for things like wireless broadband.
Third, a geeky development. Somebody rooted the G1 -- the first handset based on the open-source Android operating system. Although the operating system itself is open-source, T-Mobile had locked down all of the interesting stuff. Now that it's unlocked, we will likely see a plethora of interesting development on the platform.
Monday, October 27, 2008
White Spaces and Red Herrings
The FCC is set to decide what to do with the vast, unused swaths of spectrum between television channels in its open meeting on Election Day. When the rest of the country is paying attention to an historic contest for the ultimate game of King of the Hill, the Commission will be deciding how we are to share (or hoard) one of our most unappreciated public resources.
Back in ancient times, you would turn your television dial through channels received over "bunny ears" and wonder why so many of them showed static. Since then, most of us have transitioned to cable or satellite television, and few of us have noticed that these occasional flurries have turned into a blizzard. In fact, most stations are currently broadcasting in both analog and digital, but in February of 2009 they will have to turn off their analog transmissions (thus doubling the unused space).
What if we could instead use those channels to watch YouTube videos of cats being vacuumed? The future is now. On November 4th, the FCC will be deciding whether or not wifi-like devices can make use of the spaces between television broadcasts. They have spent more than four years investigating this question, and from all accounts they appear to be poised to say "yes." This is where Dolly Parton, Ozzy Osbourne, and Rick Warren come in.
See, all of those folks rely on wireless microphones that already use broadcast television channels when nobody is using them for TV. There are actually a few people who are licensed to do this, but the reality is that none of those I've listed are acting legally. Perhaps we would expect this from bat-biting Ozzy, or perhaps even Dolly... but Rick? How scandalous.
All three of these constituencies have filed comments at the FCC opposing innovative new uses of valuable spectrum. They do so from their understandably biased perspectives. Each wants to preserve their ability to use wireless microphones as they have become accustomed -- in Dolly Parton's case, it is essential that she maintain the integrity of audio fidelity in her live performances of the musical adaptation of "9 to 5."
Don't get me wrong. That was one of my favorite movies as a child, and I have a fondness for her music (it is the saving grace of the entire "country" genre which no longer resembles its roots in the least). However, I disagree with Dolly on this matter. Perhaps I can proceed by familiar analogy.
In "9 to 5" Dolly Parton and her clever co-conspirators exhibited innovation and flexibility, standing up to the incumbents of old. Is it too much of a stretch to claim that legacy broadcasters are similarly generating system-wide inefficiencies through their opposition to flexible use of the spectrum? Probably. But nevertheless, I think that Dolly is on the wrong side of this one. It's time to open the airwaves (if only in the limited fashion proposed). Even the houses of worship are going to have to deal with the fact that they have been misled by representatives from the microphone companies who never bothered to tell them that they were advising them to break the law.
Representative Dingell, Chairman of the House Committee on Energy and Commerce (and ultimate overseer of the FCC, along with the parallel Senate committee), recently wrote a letter to the Commission raising a couple of superficially valid but uninformed questions about white spaces proposals. He asked first why the Commission hadn't considered a licensed approach to the frequencies. Of course, this debate was well-trod long ago. The licensed approach is the instantiation of a classic "Coasian" perspective, which has been debated since the beginning of time (or, at least, the 1950's). The FCC itself convened a task force in 2002 which concluded that:
No single regulatory model should be applied to all spectrum: the Commission should pursue a balanced spectrum policy that includes both the granting of exclusive spectrum usage rights through market-based mechanisms and creating open access to spectrum "commons"...
That whole "commons" thing sounds a bit communist (or, gasp, socialist!) but in fact reasonable sharing of a shared resource makes quite a bit of sense. Despite the full force of the FCC's engineering conclusions to the contrary, the mega-churches assert that "Today, there is no reliable technology that can protect existing services from what would be crippling interference from new portable devices," Ozzy's sound engineer urges that, "The FCC must take steps to insure that catastrophic interference does not occur," and Dolly explains that:
I don't know all the legalese concerning the issue so I've had some very smart people inform me about the legalities here. [...] I have deep concern over the Commission's announcement that it intends to vote on an order allowing devices using spectrum sensing technoogy to occupy the "white space" radio frequencies on November 4, 2008 (Election Day). [...] As you may know, I am an inductee to both the Country Music and Songwriters Hall of Fame and am currently on a world tour supporting my latest album. New regulations could have direct impact on many ventures in which I am directly involved, including: 9 TO 5: THE MUSICAL [...] Dollywood [...] Grande Ole Opry
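For the technically curious, the "spectrum sensing" at issue boils down to "listen before you transmit." Here is a toy sketch of the idea; the channel model is invented, and the -114 dBm threshold should be treated as merely in the neighborhood of the figures debated in the proceeding:

import random

SENSE_THRESHOLD_DBM = -114  # illustrative; roughly the detection level at issue

def channel_power_dbm(channel):
    """Stand-in for a real RF measurement on a given TV channel."""
    return random.uniform(-130, -80)  # invented channel model

def can_transmit(channel):
    """A white-space device transmits only where it detects nothing above threshold."""
    return channel_power_dbm(channel) < SENSE_THRESHOLD_DBM

print("usable channels:", [ch for ch in range(21, 52) if can_transmit(ch)])

Whether real devices can sense reliably enough to protect TV signals and wireless microphones is precisely what the FCC's engineers have spent the last four years testing.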
Dingell's second argument is that the FCC's technical findings may not have been peer reviewed. This point appears to hinge on a much-debated statute that tasks the OMB with "ensuring and maximizing the quality, objectivity, utility, and integrity of information (including statistical information) disseminated by Federal agencies." The OMB's rules are even more debated, and have been viewed by many as an excuse for the powers that be to kill off reports they disapprove of. Indeed, there was an earlier peer review of studies related to the white spaces [update: and a similar peer review of this round of testing has now been posted], and the recent AWS-3 technical study did not undergo a similar process.
Could it be that these objections are merely a red herring?
In other news, the ranking member of Dingell's parallel committee in the Senate, a vocal opponent of open internet access, was today found guilty of multiple felony charges related to his unethical ties to an industry he oversaw.
Wednesday, October 15, 2008
White Spaces Moving Forward
- Susan Crawford's Post
- WSJ Article
- Washington Post summary of Chairman Martin's comments
- Google Public Policy Blog on the issue
- FCC Schedule for the Nov 4th Open Meeting
- The technical report from the FCC's Office of Engineering and Technology
- Comm Daily article "Activists Celebrate Wireless Gains, Relish Prospect of Obama Presidency"
- Drew Clark's article "Broadcast Networks Seek ‘Time Out’ on FCC Push for White Spaces" and my reply
- more coming...
Open Access to Government Documents ...or, "Federal Court Documents: Even Google Can't Find Them"
In the past twenty years, a remarkable number of government documents have been put online. In some cases, these documents are made easily and freely accessible. In others, technology has failed to overcome barriers to access, or has even created new ones. One particular subset of documents -- opinions, dockets, and the full public record in federal court cases -- remains behind a pay wall. Although the U.S. Government cannot hold copyright in documents it creates, it has long charged for the cost of creating and maintaining these documents. While the courts understandably seek to pay for the services they provide, this talk will argue that there is an alternative path in which the public benefits far outweigh the costs. Stephen Schultze makes a dynamic case for free access to government documents, in honor of Open Access Day 2008.
Also, check out this great essay by James Grimmelmann, discussing the Oregon statutes battle, as well as his recent lecture on the issues more generally.
Also, the European Transparency Initiative was mentioned during my talk, as well as this Dutch report.
Saturday, September 6, 2008
New Chapter in Scientology v. Anonymous on YouTube
Over the last couple of days, YouTube executed a wave of unwarranted takedowns of Anonymous videos, then reinstated most of them. EFF has a summary, and you can follow the Anonymous forums if you have the patience and stomach. It is to YouTube's credit that they appear to have restored videos and accounts in most cases, but it is also troubling that such a major deletion of speech could happen in the first place.
Thursday, August 28, 2008
My Masters Thesis...
The Business of Broadband and the Public Interest:
Media Policy for the Network Society
[Update: Adam Thierer posted a response. My critiques of his past (and current) positions can be found at pages 8 and 72 of my thesis.]
Saturday, August 2, 2008
A Reply to Nachbar, Part 2
===
The Commission’s final order struck a compromise on many of the issues at stake.[1] The “open devices”/“open applications” rules were adopted, while the more aggressive openness proposals were not incorporated. When bidding concluded months later, it was revealed that Google had indeed bid up to the reserve price on the “C” block but that Verizon had cast the winning bid. Telecommunications analyst Blair Levin quipped that because of the openness conditions on Verizon, “Google is the happy loser.”[2] Of course, Google had a clear business interest in maintaining users’ ability to access its services. Likewise, the wireless carriers perceived a business interest in retaining the ability to control what users could do on their networks. While the news reports were dominated by analysis of which big company really “won,” many missed the more fundamental public interest issues at stake.
The openness conditions in many ways mirror traditional non-discrimination public interest safeguards. The conditions seek to preserve the freedom of users to use the network as they choose, and to access it with any device that does not cause harm to the network. The former resembles a weaker form of Computer Inquiries application non-discrimination, and the latter condition mirrors the Carterfone decision of 1968.[3] There are many potential loopholes in the rules. Indeed, no sooner had the rules been decided than Verizon began lobbying for a weak interpretation and Google began counter-lobbying.[4] I described earlier why I think that the combination of common carriage, the Computer Inquiries, and Carterfone was necessary for an environment that fostered the flourishing of early consumer internet access. In the wireless context, I believe that similar flexibility of use is essential to maintaining historical non-discriminatory access in this new medium, as well as preserving the internet ethos that has led to innovation and free speech online.
Thomas Nachbar believes that defining this use neutrality is too difficult, that regulators will tend to define it in ways that constrain innovation, that the rules will not produce better behavior anyway, and that the competitive market will better address any concerns.[5] Undoubtedly, the “openness” conditions in the 700 MHz auction were defined at a high level, and were a result of political compromise. Of course, the Commission has long promulgated broad principles or rules to guide industry behavior and then specified particular guidelines or adjudicated on an individual basis.[6] Nachbar goes on to claim that the rules were defined “in a specific, technologically dependent formula,”[7] and that “imposing use neutrality requires addressing questions of design.”[8] This claim is hard to understand, given that the mandate to allow all devices and applications is clearly divorced from particular technologies and indeed is designed to open the possibility of unforeseen technologies. This is the heart of technology-agnostic network modularity. Nachbar would also have us believe that the rules represent only a weak form of Carterfone, which by itself will be ineffective.[9] This ignores the full implications of the open applications provision, which extends the non-discriminatory mandate into the network.[10] It appears that Nachbar and I agree that two-sided openness (user device and network access) would be necessary for meaningful openness, but that we disagree as to whether this can be done through wireless use neutrality.[11] Nachbar instead sees promise in profit-motivated market actors. He makes much of the somewhat competitive wireless carrier market.[12] However, it is clear that carriers all share similar incentives to discriminate against content, and that there is no competitor that offers comparable non-discriminatory service. AT&T recently stated explicitly that its wireless network does not respect network non-discrimination, and that its terms of service – “which are similar to those of other wireless providers” – categorically prohibit all peer-to-peer use.[13]
Ultimately, Nachbar’s critique of the 700 MHz “openness” rules focuses almost entirely on competition-based analysis (which, even on its own terms, I consider to be deeply misguided). Missing from his analysis is any consideration of whether the 700 MHz use neutrality rules map to historical non-discrimination norms. This is odd, considering his masterful exposition of these norms earlier in his paper. In the end, the non-discriminatory considerations in the wireless space parallel the network neutrality debate overall, and my conclusions here are essentially the same as my conclusions there. As with wireline, wireless operators face genuine network congestion challenges. Content- and application-based discrimination is one way of dealing with these challenges. There are many other approaches – including discrimination that is not content- or application-based[14] – that do not so directly threaten free speech, innovation, and established norms.
[1] Second Report and Order, FCC 07-132 (rel. August 10, 2007).
[2] "Verizon and AT&T Win Big in Auction of Spectrum," New York Times, March 21, 2008. by Saul Hansell.
[3] Tim Wu, "Wireless Carterfone," International Journal of Communication, Vol. 1, p. 389 (2007). Available at SSRN: http://ssrn.com/abstract=962027
[4] Letter from Richard S. Whitt, Google, WT Docket No. 06-150, (October 1, 2007).
[5] Nachbar, 80-89.
[6] Elsewhere, Nachbar endorses precisely this approach. (at 90)
[7] Nachbar, at 81
[8] Nachbar, at 88
[9] “The rules adopt a version of what has become known as ‘Wireless Carterfone’.” Nachbar, at 81.
[10] To be sure, whether or not this is the case could be disputed. Nachbar’s narrow interpretation is that the provision only limits “the ability of carriers to prevent consumers from loading and running third[-party] applications on those openly accessible devices.” (at 81). Even if the 700 MHz rules as adopted did not effectively mandate use neutrality in the network, this does not mean that the approach should be abandoned altogether but rather that such rules should perhaps be more explicitly defined. I am considerably more hopeful that it is possible to do this than is Nachbar. This is essentially the same question that plays out in the broader network neutrality debate, which I discussed earlier.
[11] Nachbar states that, “from a consumer standpoint, the product is the combination of device (or application) and carriage.” (at 82) I agree. I remain confused, however, about why he sings the praises of the Computer Inquiries while maintaining that use neutrality is categorically a bad idea. I am not persuaded by the argument that the IP environment is fundamentally different from the circuit-switched environment in such a way that use neutrality is impossible or undesirable.
[12] “But the wireless markets of today are not like the wireline market that AT&T operated in years ago. Today’s wireless carriers face 2 competitors in over 90% of their markets, and therefore have far less market power than AT&T did.” (Nachbar at 82) Nachbar goes on to perform an analysis of the market incentives of wireless operators that I believe is fundamentally flawed on several accounts. He begins by noting the “internalizing complementary efficiencies” phenomenon and claiming that, “If wireless carriers actually do have market power, then opening device and application markets to competition will have no effect on their ability to charge monopoly rents.” (at 82) Of course, neither of us thinks that wireless carriers are strict monopolists, and thus the ICE exception is irrelevant. On the other hand, these similarly situated companies sometimes resemble an oligopoly, with strong incentives to leverage market power into adjacent markets. Because they face potential competition on price, speed, and device exclusivity, they are motivated to increase switching costs and customer lock-in. I am puzzled by Nachbar’s assertion that “carriers are selling a commodified, undifferentiated service (carriage),” given the ample evidence that carriers are in fact differentiating between content, and Nachbar’s own claim that in an IP environment carriers are motivated to differentiate in a way that they were not in the circuit-switched environment. Nachbar then claims that any market power being exercised is likely coming from the device manufacturers instead, citing the iPhone-AT&T tie-up and the fact that the iPhone has lured many customers to AT&T. Of course, one might just as easily conclude that it was precisely the distorted wireless carrier market that motivated Apple to strike the exclusive deal. In any event, despite the perennial appearance of blockbuster devices, the device market is far more diverse and competitive than the carrier market. Furthermore, the device market continues to move toward open platforms of its own accord, with device juggernaut Nokia announcing the open-sourcing of its operating system on the eve of the launch of Google’s own free and open source “Android” mobile operating system (Nokia Press Release (June 24, 2008), “Nokia to acquire Symbian Limited to enable evolution of the leading open mobile platform,” http://www.nokia.com/A4136001?newsid=1230415; Open Handset Alliance Press Release (November 5, 2007), “Industry Leaders Announce Open Platform for Mobile Devices,” http://www.openhandsetalliance.com/press_110507.html). What’s more, it is hard to imagine any strong device leveraging as general-purpose computers increasingly become one of the key devices using wireless internet. Despite such developments, if carriers insist on discriminatory practices, the same bottleneck to innovation remains: use neutrality of government-granted spectrum. Critics of non-discrimination mandates on wireless spectrum raise myriad concerns that such requirements restrict possible business plans. They undoubtedly do. The relevant question is whether or not this benefits or harms overall innovation, growth, and the public interest.
[13] Letter from Robert Quinn, AT&T, WT Docket No. 06-150, (July 25, 2008).
[14] Geoffrey Goodell, Allan Friedman, Scott Bradner. "Scarcity, Discrimination, and Transparency: Understanding Network Management" (Paper to be presented at TPRC 2008 Conference, Saturday September 27, 2008).
===
Monday, July 28, 2008
Comcast Order: What to expect Aug 1 and beyond
[Update: Oh snap! The politicking is bleeding into the WSJ Op-Ed page, and the NYT Op-Ed page too. And now we've got the House Republican minority leader trashing the decision on the eve of its announcement.]
The word from inside the beltway was that it was touch-and-go for the last week as to whether or not the order was going to go through. The biggest debate appeared to be over complex jurisdictional issues. Comcast has been arguing 1) that the Commission could not enforce the non-binding 2005 "policy statement" without first passing rules and 2) that in any event it lacked statutory authority to intervene based on broadband's "deregulated" Title I status. Martin himself said in 2005 that "policy statements do not establish rules nor are they enforceable documents." This thrilling administrative law debate went roughly as follows:
Free Press: The FCC has jurisdiction to act, based on 8 different statutory sources. Also, the FCC frequently exercises its power to act via adjudication rather than rulemakings. Oh, and the cherry on top is that Comcast is currently arguing in federal court in California that the FCC does have jurisdiction over these matters.
Comcast: No, those clauses don't give the FCC jurisdiction to act. Plus, Free Press has changed its story since its original complaint, where it asked the Commission to enforce the policy statement. It is now asking the Commission to enforce particular statutes.
Free Press: We haven't changed our story, and it doesn't matter regardless. Oh and by the way, we also have a bunch of legal scholars that say that the Commission has jurisdiction.
Comcast: The parts of the statutes you cite are just preambles and statutory statements of "policy," which the D.C. Circuit Court says are "not an operative part of the statute and [do] not enlarge or confer powers on administrative agencies or officers."
Media Access Project: Comcast was warned that the FCC would do this if they discriminated. Oh, and the Sixth Circuit seems to say that there is yet another statutory basis for jurisdiction.
Free Press: Yeah, what they said.
Comcast: We've retained a DC law firm which says that the FCC does not have jurisdiction.
Free Press: I think we've stated our case.
Does this mean that the Net Neutrality crowd "won"?
Getting the FCC to take action on this issue was a battle, but not the war. Many considered it unthinkable that anything would happen on this front until the next administration. This moves the ball in their direction but the game is far from over. Free Press has already posted two things on their blog in celebration.
What did the Net Neutrality crowd win?
It's unclear exactly what is in the order. According to news reports, it doesn't include a fine on Comcast. It probably includes requirements for them to stop doing whatever they've been doing, and to clearly disclose their "network management" practices. This is far from a broad neutrality mandate. The order probably won't lend a great deal of clarity to the question of what constitutes "reasonable network management." There is no broader rule from the commission, which would likely carry more power and clarity than the ad hoc approach of adjudication. There is certainly nothing with the force of statute.
What Happens Next?
Comcast sues the FCC. This will happen immediately. The case will go directly to a federal appeals court, because challenges to FCC orders skip the district courts. Comcast will argue many of the same things they argued in the proceeding. They will discuss how the Brand X case placed broadband clearly within "deregulated" Title I. Title I is essentially the introduction to the Communications Act, and is pretty thin on details. Comcast will attack whatever statutory ground the FCC claims for its decision. This will come down to questions like whether the Commission can construe the general language of Title I to give it authority to take the specific actions in the order, or whether other sections of the Act which appear to apply to other technologies (like common carriers) can actually apply to cable. They might even get down-and-dirty and start talking about the 1979 FCC v. Midwest Video Corp. decision, which said that the Commission did not have jurisdiction to impose common carriage on cable because cable was a "broadcast" service. Who knows how this will come out, but Declan McCullagh over at CNet concludes (without much analysis) that the FCC "probably can't police" the order. In the meantime, the stock price of Sandvine (the company that makes Comcast's "traffic shaping" hardware) will probably continue to tank.
But what about nipples?
I'm glad you asked. The recent 3rd Circuit decision invalidating the FCC's finding of indecency in the "wardrobe malfunction" incident actually relates to the current situation. In that decision, the Court found that the FCC's fine was "arbitrary and capricious" and that there was no clear statutory basis or precedent for it. Some of Comcast's arguments are sure to echo this reasoning, and they appear to feel emboldened by the 3rd Circuit decision even though the subject matter is quite different. [Update: Free Press et al. rebut this argument]
Where does this put the Net Neutrality fight?
Neutrality proponents can now bring complaints to the FCC with somewhat more confidence, and this whole ordeal has probably put more fear in the hearts of broadband providers that would like to discriminate against traffic. The inevitable appeal provides another stage on which to debate neutrality. On the other hand, neutrality proponents now have a weaker version of the argument that Congress must step in because the FCC isn't doing anything. Still, from all appearances, the next Congress will be far more open to the idea of legislation (although most people don't think it will happen in the first session).
Well Steve, what do you think?
Adam Thierer (from think tank Progress and Freedom Foundation) commented that this FCC decision is evidence that the regulatory sky is falling on liberty's head, and I replied. [Update: The PFF crew continues to pile on, with even the ideologically aligned Tim Lee telling everybody to chill out. Hance Haney can't help but get in on the bashing.]
Sunday, July 27, 2008
A Reply to Nachbar
***
Thomas Nachbar has argued that the ideal non-discrimination rule would prevent user-based discrimination but allow carriers to discriminate based on use. Under this regime, providers would be able to choose which services they support (and how they prioritize or discriminate among them) but they would be required to offer the same deal to everyone. Google could not pay for faster delivery than Yahoo. He reasons that user discrimination is easier to define than use discrimination, and less prone to regulatory abuse. He envisions this user-based neutrality as enforced by “standards” and not law or formal rules.[1] Furthermore, he claims that mandating uniform treatment of all packets would discourage applications that require prioritization or quality of service guarantees, making it a type of discrimination itself. To be sure, networks that treat all traffic uniformly make it more difficult to use certain applications. However, Nachbar’s core criticism appears to be not that someone will be choosing how to prioritize, but rather that in some neutrality regimes the government would be choosing. The best entity to choose, on his account, is the last-mile provider.
I disagree. Both use and user non-discrimination should be policy goals. This makes sound economic sense, is consistent with historical non-discrimination precedent, and supports the internet ethos of diverse uses and an abundance of peers. Historically, use and user were closely linked, and non-discrimination in one area could ensure non-discrimination in the other. For example, the Computer II rules mandated only that carriers not discriminate based on the phone number called. However, because of the simplistic circuit-switched technology (and the Carterfone right to attach devices), the rules ensured that use-based discrimination would not occur. Today, user-based discrimination protects only against a subset of harms, which in any event might already be addressable under antitrust doctrine.[2] It does not ensure that carriers support applications that they do not think will be profitable, or that compete with their non-internet offerings,[3] or that have not yet been invented. The problem is that surrendering use-based discrimination to last-mile providers would subject the general-purpose infrastructure to carrier-profit-oriented incentives. In fact, it discriminates against users with business models or non-commercial modes of production that rely on technology uses not approved by the carrier.[4] The technology of the internet presents us with a choice we have not had to make historically, because user-based neutrality has always implied use-based neutrality. Nachbar is prepared to give up on use neutrality, while I am not.
One way to maintain use-based non-discrimination by carriers would be to place prioritization control in the hands of the users. Most content/application providers already have the opportunity to exercise this control by going to any number of competitive backbone providers. Different backbone providers offer different quality-of-service guarantees for common metrics like latency, throughput, and jitter (at least up to the edge of their networks). End users, who are accessing this content or these applications or are connecting with each other in peer-to-peer fashion, have no such ability to choose different prioritization via competitive providers or by specifying preferences to their provider. Indeed, even across-the-board neutrality may disfavor particular applications users wish to use, although this may be more appropriate and efficient than the last-mile provider’s blanket imposition of prioritization. A better solution would allow end users to easily control the prioritization of their own traffic, within the tier of service they have purchased from their provider. Such a solution might build a more sophisticated “Type of Service”-style component into some layer of the network protocol, defined through a standards group such as the IETF.[5] This approach recognizes that different users have different usage needs, and it places the control in their hands. It refuses to foreclose new uses simply because the network owner did not think of them first, and it catalyzes innovation at the “edges.” It is not true to absolute neutrality, but it is true to fundamental principles of non-discrimination and the internet ethos. Such an approach is unlikely to garner initial favor with carriers: it preserves user control, it takes away their “congestion” justification (even as it resolves the underlying problem), and it would take more technical and cooperative work than blunt discrimination. The appropriate policy path to this outcome might involve a use-neutrality mandate on last-mile providers, with an exception for user-specified, standards-defined prioritization.
[1] It is unclear what these “standards” might be, other than the existing standards within the internet protocol, which have clearly been ignored in cases such as the recent Comcast/BitTorrent back-and-forth. As such, I am not sure what real force they would bring to bear on the situation aside from the unsustainable ad hoc complaint adjudication that the Commission is currently undertaking.
[2] I am skeptical and discuss this in detail earlier in the thesis.
[3] Although Nachbar seems to think so (e.g., telephony and video).
[4] E.g., p2p.
[5] David Reed at the Harvard hearing: “There were a wide range of actual standards that would allow Comcast to manage and prioritize traffic, including diffserv, ECN, RED...” http://www.fcc.gov/broadband_network_management/022508/reed.pdf. One might add to this list the RSVP protocol (RFC 2205) and other methods that use flow-based prioritization (such as the method described in J. L. Adams, L. G. Roberts, A. Ijsselmuiden, “Changing the Internet to support real-time content supply from a large fraction of broadband residential users,” BT Technology Journal, v.23 n.2, p.217-231, April 2005). Some of these tools can be used by network operators to choose their own discriminatory practices, or they might be implemented in such a way as to enable user-based control. Early internet engineer David Clark recently remarked (video recording available at http://www.fcc.gov/broadband_network_management/hearing-ma022508.html, with the quote at 4:24:45):
I don't like the idea of the ISP assigning quality of service to an application. If there is going to be any discrimination in terms of quality of service that's associated with some packets rather than others, I would prefer that the bits which select those packets for enhanced service be set by the user. The user could say 'this telephone call is really important. I want this telephone call to go through.' Imagine that in any given month, ten percent of your traffic could be high priority. You could say, 'this is it, I want it here.' It could be my choice as to whether that's a phone call or a game, or I'm trying to get a bid into eBay or whatever I'm trying to do. I would like the user to be able to assign those priorities. If you look at the way that internet telephony is done today, those bits are set by the phone device. It's not set by the ISP. It's the phone device that says, 'this is a phone call and therefore I will set these bits,' and if the ISP chooses to honor these bits then these packets will go through better. That's something that could be superimposed on top of the basic idea of usage quotas.
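The bits Clark describes already exist in the IP header today. As a minimal sketch (my own illustration, assuming a Linux host and Python's standard socket module; the address and port are placeholders), here is how an application, rather than the ISP, could mark its own traffic with the DiffServ "Expedited Forwarding" code point:

```python
import socket

# DSCP "Expedited Forwarding" (EF, RFC 3246) is code point 46. The DSCP
# field occupies the high six bits of the old IPv4 Type of Service byte,
# so the value handed to IP_TOS is 46 << 2 == 0xB8.
DSCP_EF = 46
TOS_BYTE = DSCP_EF << 2

# An ordinary UDP socket for, say, a voice call the user deems high priority.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

# Mark this socket's outbound packets. The user (or their application) sets
# the bits; whether any network honors them is up to each operator -- the
# "if the ISP chooses to honor these bits" caveat Clark describes.
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, TOS_BYTE)

# Placeholder address and port for illustration (192.0.2.0/24 is reserved
# for documentation).
sock.sendto(b"voice frame", ("192.0.2.10", 5060))
```

The remaining questions are the policy ones: who may set those bits, how much high-priority traffic a user's tier allows, and whether the last-mile provider honors the marking.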
***
Saturday, July 26, 2008
Say "No" to Filtered Nationwide Broadband
From the summary of our comments:
"The Internet is distinguished by its flexibility as a platform on which new services can be built with no pre-arrangement. While requiring filtering of known protocols in itself raises serious First Amendment conflicts, forcing the blocking of unknown or unrecognized traffic hampers both speech and innovation."
You can read our full comments here, and a blog post about it here.
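To make the "unknown or unrecognized traffic" point concrete: a filtering mandate tends in practice to become a whitelist, and a whitelist drops by default whatever it has never seen. A toy sketch (entirely hypothetical; not drawn from the filed comments) of naive port-based classification shows the failure mode for a brand-new protocol:

```python
# Hypothetical whitelist-style classifier: traffic is identified by
# destination port, and anything unrecognized is dropped by default.
KNOWN_PORTS = {25: "smtp", 53: "dns", 80: "http", 443: "https"}

def classify(dst_port: int) -> str:
    return KNOWN_PORTS.get(dst_port, "unknown")

def should_forward(dst_port: int) -> bool:
    # "Blocking of unknown or unrecognized traffic" means default-deny.
    return classify(dst_port) != "unknown"

assert should_forward(443)        # recognized protocol passes
assert not should_forward(61384)  # a new protocol on an unregistered port
                                  # is blocked before it can prove itself
```

A service that today can be built "with no pre-arrangement" would, under such a regime, need the filter's pre-arrangement first, which is the innovation harm the comments describe.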
Wednesday, March 5, 2008
Audio from the Archive
Last year, I produced a little audio montage for one of my courses. I'd completely forgotten about it until one of my classmates asked me for a copy. It's essentially a collection of clips related to contemporary internet policy debates, mixed over music by The Irresistible Force and Fila Brazilia.
You can listen to the MP3 here.
Voices include:
Studs Terkel, Documentarian (2001)
Marshall McLuhan (1967)
J.C.R. Licklider, Internet Engineer (1972)
Bell Labs Speech Synthesis Record (1962)
Bill Moyers (2006)
Senator Ted Stevens, R-AK (2006)
Lyndon B. Johnson (1967)
Edward R. Murrow (performed by David Strathairn)
Newton Minow, FCC Chairman (1961)
Tim Wu, Columbia Law School (2006)
Yochai Benkler (2006)
Vinton Cerf, Internet Engineer (2006)