
Code Wars

10 Years of P2P Software Litigation

Rebecca Giblin

With reference to US, UK, Canadian and Australian secondary liability regimes, this insightful book develops a compelling new theory to explain why a decade of ostensibly successful litigation failed to reduce the number, variety or availability of P2P file sharing applications – and highlights ways the law might need to change if it is to have any meaningful effect in future.

Chapter 1: Introduction

Rebecca Giblin

The P2P phenomenon

In 1998 a college dropout called Shawn Fanning wrote a little application called Napster, and the way content was delivered to consumers changed forever. Those few lines of code launched owners of the copyrights in music, movies, books and games into a death battle to protect traditional revenue streams and preserve their exclusive right to new ones. In previous decades they’d felt threatened by plenty of other technologies, including the phonograph, radio, photocopier and VCR, but they had never before faced such a Hydraean opponent. In the next decade, they expensively and lengthily litigated three major actions against P2P providers in US courts, and then a fourth action in Australia. In all that time not a single P2P provider ultimately emerged victorious. Sooner or later, every single one was held liable for the copyright infringements of its users.

Against any one of those predecessor technologies, that kind of emphatic victory would surely have brought about compromise or obliteration. But P2P file sharing software proved different. Unfazed by the overwhelming legal successes of rights holders, software developers continued creating new programs that facilitated file sharing between individual users. By 2007 there were more individual P2P applications available than there had ever been before. The average number of users sharing files on P2P file sharing networks at any one time was nudging 10 million,1 and it was estimated P2P traffic had grown to comprise up to 90 percent of all global internet traffic.2 At that point, rights holders tacitly admitted defeat. Abandoning their long-held strategy of suing key P2P software providers, they closed the chapter on P2P litigation and diverted enforcement resources to other areas, particularly global efforts to persuade or compel internet service providers to police infringing users.

This book tells the story of that decade-long struggle between rights holders and P2P software providers, tracing the development of the fledgling technologies, the attempts to crush them through litigation and legislation, and the remarkable ways in which they evolved as their programmers sought ever-more ingenious means to remain one step ahead of the law. In telling the complete legal and technological story of this fascinating era, the work focuses on answering the question that has so long baffled beleaguered rights holders – why is it that, despite being ultimately successful in holding individual P2P software providers liable for their users’ infringement, their litigation strategy has failed to bring about any meaningful reduction in the amount of P2P development and infringement?

Under the P2P model, all or most of the infrastructure necessary to distribute content – together with the content itself – is supplied by the participating individuals. It is this fact that is at the crux of rights holders’ objections to P2P file sharing technologies. Very often it is their content that is being made available to potentially millions of individual users, without license, and without the payment of any royalty. In the vernacular of P2P providers, this is known as file sharing. The owners of that content would prefer to call it stealing.

At times, the US music industry has taken direct enforcement action against some of these individuals, hoping that the astronomical statutory damages available under US law, ranging between $200 and $150 000 per infringed work, might deter future infringers.3 From 2003–2007, members of the industry “filed, settled, or threatened” lawsuits against more than 20 000 individuals.4 Illustrating the enormity of the campaign, just 2084 civil copyright suits were instituted in total across the whole of the US the year before the Recording Industry Association of America’s (“RIAA”) campaign commenced.5 However, by any measure, it was not a success. Missteps by the RIAA helped turn the campaign into a public relations nightmare. (One memorable case involved a Mr Larry Scantlebury, war veteran and grandfather of three, who passed away during the course of the litigation against him. The plaintiffs sought to continue the suit postmortem against his children – albeit after a 60-day stay to allow them “time to grieve”.6) There was relatively little support for the suits, and only a tiny percentage of the actions made it to trial. And even those few that resulted in the sought-after awards of statutory damages can’t be described as untrammeled successes. The most notorious involved a Minnesota single mother of four, Jammie Thomas, who was sued for sharing 24 songs via the Kazaa P2P file sharing program. 
Massive publicity followed the farcical situation as statutory damages of over $222 000 US were awarded at her first trial, upped to $1.92 million by the jury at a second, slashed to $54 000 by the trial judge’s exercise of remittitur (being the maximum that he considered not to be “monstrous and shocking”7), and then raised back up to $1.5 million by yet a third jury.8 In a second case, undergraduate student Joel Tenenbaum was sued for infringing the copyright in 30 songs, and the jury awarded $675 000 in statutory damages.9 The District Court judge subsequently found this to be “unconstitutionally excessive”, and substituted an award of $67 500 in total.10 As of early 2011, both matters were still being appealed.

The publicity over these and other cases triggered public questioning of the industry’s motives and business model, and escalated growing discontent from its customer base. The loss of goodwill brought about by the direct litigation campaign might have been acceptable collateral damage had it been effective, but it did not bring about any reduction in the amount of file sharing. Indeed, despite the unprecedented number of lawsuits initiated or threatened, P2P infringement actually appeared to increase over the relevant period.11 In late 2008 the music industry abruptly announced an abandonment of its mass litigation strategy against end users (although unfortunately for Thomas and Tenenbaum, no abandonment of existing suits).12

The failure of the direct litigation campaign was predictable in advance. It has long been recognized that liability imposed directly upon wrongdoers will sometimes be ineffective.13 Professor Reinier Kraakman argues that this will be the case where “‘too many’ wrongdoers remain unresponsive to the range of practicable legal penalties.”14 That was precisely the case for P2P file sharers: the number of participating infringers was so high, the price of pursuing them so costly, and the chances of their being apprehended so remote, that the threat of direct infringement liability – even with the possibility of astronomical penalties – left individual infringers largely unmoved.15 Where direct liability will be predictably ineffective, the “standard legal response” is to seek a remedy by targeting the intermediaries or “gatekeepers” responsible for committing or enabling large-scale infringement.16 As Professor Tim Wu has pointed out, until recently, copyright law was “entirely dependent on gatekeeper enforcement”:17

. . . [C]opyright law achieved compliance through the imposition of liability on a limited number of intermediaries – those capable of copying and distributing works on a mass scale. The gatekeepers were book publishers at first; later gatekeepers included record manufacturers, film studios, and others who produced works on a mass scale. Their role resembled that of doctors with respect to prescription drugs – they prevented evasion of the law by blocking the opportunity to buy an infringing product in the first place.18

Traditionally, rights holders had considerable success in using legal doctrines based on these principles of gatekeeper enforcement to shut down activities that facilitated copyright infringement, whether they were swap meets whose proprietors tacitly permitted vendors to sell infringing records, dance halls whose operators didn’t secure licenses allowing visiting bands to perform copyrighted music, or advertising agencies who created campaigns for purveyors of “suspiciously” cheap records.19 Such enforcement efforts were also successful in deterring many later market entrants from engaging in the kinds of conduct that had previously resulted in liability, and thus further limiting eventual third party infringement. When they commenced their 10-year struggle to apply the same principles to P2P software providers, rights holders undoubtedly expected to achieve the same outcome.

A unique vulnerability to anti-regulatory code

To start to understand why those lengthy, expensive and ultimately successful efforts to shut down individual P2P file sharing technologies had little or no impact on the current availability of file sharing software, it is necessary to understand something about the unique properties of software code. For some time now it has been recognized that code can have regulatory effects – or, as Professor Lawrence Lessig famously put it, that “code is law”.20 As he explains, “[t]he software and hardware that make cyberspace what it is constitute a set of constraints on how you can behave”.21 For example, software code may regulate behavior by imposing a password requirement on users seeking to gain access to a particular service.22 Historically, rights holders have used a variety of code-based measures as part of their efforts to promote compliance amongst end-users, with the most notable example being Sony’s disastrous rootkit experiment.23 In the P2P file sharing context, however, the idea that code regulates is less significant than the separate but related idea that code can be anti-regulatory in effect. Wu explains that “the reason [why] code matters for law at all is its capability to define behavior on a mass scale. This capability can mean constraints on behavior, in which case code regulates, but it can also mean shaping behavior into legally advantageous forms.”24 Wu analogizes such anti-regulatory programmers to tax lawyers. “[They look] for loopholes or ambiguities in the operation of law (or, sometimes, ethics). More precisely, [they look] for places where the stated goals of the law are different than its self-defined or practical limits. The designer then redesigns behavior to exploit the legal weakness.”25

As the following chapters will demonstrate, post-Napster P2P developers engaged in precisely this kind of behavior, routinely seeking to code their software in ways that sidestepped the limits of the existing law while nonetheless still facilitating vast amounts of infringement. This book explores the great lengths they went to in their efforts to fall outside the strict letter of existing secondary liability formulations, including by coding their software to utilize encryption, to eliminate liability-attracting centralization or to facilitate copying in unanticipated new ways. Some of these strategies enjoyed a remarkable degree of success – for example, Chapters 3 and 4 will demonstrate that those behind the Grokster and Morpheus file sharing applications were so successful in coding their way out of liability that the US Supreme Court had to create a new legal doctrine to defeat them. Such P2P file sharing technologies highlighted for the first time the copyright law’s peculiar vulnerability to attack by anti-regulatory code. However, the reasons for that vulnerability remain largely unexplored.

The best explanation to date comes from the groundbreaking article “When Code Isn’t Law”, in which Wu identifies two reasons for this susceptibility. The first is the law’s longstanding reliance on gatekeeper enforcement mechanisms, which was introduced above. Gatekeeper enforcement schemes are premised on the idea that relatively few people are capable of widespread copying and distribution.26 Thus, as Wu explains, they “have an obvious weakness: They depend on a specialized good or service remaining specialized.”27 P2P file sharing technologies subvert that assumption by placing the ability to efficiently and cheaply distribute books, movies, music and other content in the hands of individual consumers. The second reason was the dearth of normative support for the law from individual users. Wu’s reasoning on this point was based on empirical studies that suggested individual end users had a widely held belief that copying copyrighted material for a friend was acceptable, whereas selling it on a commercial basis was not.28 Wu argues that P2P file sharing applications “brilliantly” exploit this distinction between commercial and non-commercial copying:

P2P clients create no sensation or impression of stealing . . . Instead, the user is invited to a “community” of peers who exchange song files. A user, importantly, has no sense that she is “selling” copyrighted materials. The design therefore exploits the distinction between the acceptance of non-commercial copying and the non-acceptance of commercial copying. While the economic consequences of peer filesharing could be large, the superficial absence of commercial exchange makes filesharing more acceptable under the norms of home copying.29

Thus, by eliminating gatekeepers, and by exploiting the fact that many individuals don’t have any ethical problem with “sharing” content with others online, Wu argued that P2P software providers have sometimes managed to avoid the law’s traditional enforcement measures.30

A new reason for that vulnerability

This book puts forward a third reason for that vulnerability, which not only explains why the pre-P2P secondary liability law proved so peculiarly unsuited to the task of dealing with purveyors of anti-regulatory code, but also why even successful litigation against providers of P2P software has failed to curb their spread. It is premised on the fact that software is radically and fundamentally different from physical world technologies in a number of different ways. As the following chapter will demonstrate, the US pre-P2P secondary liability law evolved from decades of decisions relating almost exclusively to physical world scenarios and technologies. Necessarily, the resulting principles were based on certain assumptions that had long proved correct in the physical world paradigm. This book is premised on the idea that there is a gap between those physical world assumptions and the realities of P2P software development, which it dubs the physical world/software world divide. It argues that, by failing to fully recognize the unique characteristics that distinguish software code and software development from their physical world predecessors, the law has been and continues to be vulnerable to exploitation by those who understood that those traditional or physical world assumptions do not always hold good in the software context.

Legal scholarship has touched upon the distinctions between software worlds and physical worlds in several different contexts, particularly in considering whether and how computer software should be provided with patent and copyright law protection,31 and while considering the jurisdictional and choice of law difficulties associated with enforcing laws in cyberspace.32 However, their implications remain poorly understood. It is not surprising that we are slow to acknowledge the revolutionary properties of code. Katsh explains that, historically, new technologies are frequently “perceived not as something with unique characteristics that will create new institutions and change old ones, but rather as something that simply extends the capabilities of . . . existing technolog[ies].”33 Thus “early films were labeled ‘moving pictures’ and were not immediately understood to be a new art form”;34 “the first cars were called ‘horseless carriages’ and looked as though they were designed to be pulled by a horse”;35 and early personal computers “were called ‘typewriters with memory.’”36 As Katsh explains, the danger of equating unlike technologies is that it may “mask the revolutionary character of the new technology”.37 In turn, this can lead to legal standards that miss their targets because they fail to take into account the properties of the new innovation that make them unique.

This book will argue that this is precisely what has occurred in the P2P file sharing context. It begins by identifying four main physical world assumptions that lie at the heart of the pre-P2P US secondary liability law. The gap between those assumptions and the realities of P2P software development is explored as the book progresses, providing a third explanation for the secondary liability law’s vulnerability to anti-regulatory code, and satisfactorily explaining for the first time why the litigation strategy against P2P providers was ultimately unsuccessful in bringing about any meaningful reduction in the amount of P2P development and infringement.

Since this theory focuses inquiry on the characteristics of software code that make it different and unique as compared to physical world equivalents, it’s necessary to conceptually separate software from hardware. Software refers to the “programs and procedures required to enable a computer to perform a specific task”,38 while hardware is the physical equipment necessary to execute software’s commands. The definition of “code” adopted in the existing legal literature typically conflates the two by defining “code” as the “information technology architecture,” or “the hardware and software,” that constitutes a particular technology.39 It is easy for these lines to become blurred, as software is increasingly incorporated in much of the hardware we use in day-to-day life, including MP3 players, personal video recorders, cars, microwave ovens, and so on. However, it is necessary to separate them in this context, since the equation of hardware and software risks masking the unique characteristics of software code on its own account, and particularly the ways in which it differs from the physical world technologies that came before it.

Physical world assumptions

Everybody is bound by physical world rules

This first assumption is the most abstract and poorly understood of the four, and understanding it requires delving into some of the conceptual differences between software worlds and physical worlds.

Consider what we know about the physical world. We have an immediate and “intuitive” understanding about how it works.40 “Apples, when released fall down, not up. Actions are causally related to consequences. We expect things to behave sensibly. Our intuitive notion of what is ‘sensible’ is based on common-sense experiences, learned from earliest childhood, and rooted in the physical world.”41 As Katsh explains:

In the ‘real world,’ time and space are ever-present constraints, with the laws of physics frequently limiting many of our desires to do something or be somewhere. The list of constraints to which we accommodate ourselves is significant. We respect the laws of gravity. We understand that no more than one object can occupy the same place. We recognize that we can only be in one place at one time and that there are some places we cannot go to because there is not enough time or because they are too far away.42

What is less well understood is that physical world rules do not necessarily apply to software. In fact, neither the laws of physics nor any other “law [or] principle known in the physical world” has any application in the virtual context.43 As Professor Joseph Weizenbaum explains:

There is a distinction between physically embodied machines, whose ultimate function is to transduce energy or deliver power, and abstract machines, ie, machines that exist only as ideas. The laws which the former embody must be a subset of the laws that govern the real world. The laws that govern the behavior of abstract machines are not necessarily so constrained. One may, for example, design an abstract machine whose internal signals are propagated among its components at speeds greater than the speed of light, in clear violation of physical law.44

Unbound by physical world rules, software code is incredibly malleable. Indeed, Professor James Moor identified that “logical malleability” as software code’s revolutionary characteristic.45 The medium’s inherent freedom and flexibility led Weizenbaum in 1976 to famously describe computer programmers as “creator[s] of universes . . . of virtually unlimited complexity.”46 That unrestrained capability can, of course, be reined back by other code. An operating system, for example, can impose limits as to how software designed for that platform must operate. However, the internet was deliberately designed to be as free and open as possible for future developers and, as a result, developers of internet-based P2P file sharing programs have very few code-based constraints upon them.47 All of this means that entities in a software world “can be made . . . to overlap, interconnect, and interact in ways that are not possible or feasible in the physical world.”48 Programmers can write software that will do things that are simply not possible or feasible when limited by physical world constraints.

As Chapter 2 will demonstrate, copyright law evolved in response to decades of litigation involving physical world scenarios and technologies. The intuitive and unacknowledged understanding that we all have of the physical world’s constraints has played a large role in informing the law’s response to those scenarios. There can be no doubt that judges must sometimes have been influenced by unspoken and unacknowledged assumptions that if certain things were infeasible, impossible or impractical in the physical world, they were infeasible, impossible or impractical full stop. Since these assumptions did hold good in the physical world context, the secondary liability law long worked well, and secondary infringements were limited, being “for the most part, crude, marginal transactions, the subjects of swap meets and unlicensed kiosks.”49 As the following chapters will demonstrate, however, secondary liability principles based on the assumption that physical world rules apply can result in unanticipated outcomes when applied to situations where they simply do not. For example, a law that implicitly assumes that knowledge of a wrongdoing will be a natural corollary of a defendant’s culpability may struggle to respond to a defendant that utilizes encryption software to eliminate such knowledge. This might be the kind of phenomenon that Mitch Kapor and John Perry Barlow were hinting at when they observed in 1990 that “the old concepts of property, expression, identity, movement, and context, based as they are on physical manifestation, do not apply succinctly in a world where there can be none.”50

Developing and distributing distribution products is expensive

The final three assumptions identified in this work are less abstract, and flow on closely from one another. The first of them relates to expense. As Professor Jessica Litman has observed, “[o]ur copyright law was designed in an era in which mass distribution of copies of works required a significant capital investment.”51 There can be no doubt that the creation of physical world distribution technologies capable of vast amounts of infringement, such as printing presses, photocopiers, and VCRs, typically requires large investments in research, development and infrastructure.52 Even if the initial invention of a physical world distribution technology is achieved cheaply – and history is filled with examples of hobbyist inventors on shoestring budgets making amazing breakthroughs53 – bringing it to market, mass-manufacturing, promotion and delivery all require considerable amounts of cash.

The sizeable investment necessary to develop, manufacture and deliver a physical distribution technology creates high barriers to market entry that limit the number of manufacturers to relatively few – a fact that has long made it easier for content owners to enforce their rights against secondary infringers. One of the reasons that the copyright law evolved to rely on gatekeeper enforcement measures, as outlined above, was because these factors prevented end users from participating in widespread dissemination of copyrighted works. As Professor Jane Ginsburg explains:

Copyright owners have traditionally avoided targeting end users of copyrighted works. This is in part because pursuing the ultimate consumer is costly and unpopular. But the primary reason has been because end users did not copy works of authorship – or if they did copy, the reproduction was insignificant and rarely the subject of widespread further dissemination. Rather, the entities creating and disseminating copies (or public performances or displays) were intermediaries between the creators and the consumers: for example, publishers, motion picture producers, and producers of phonograms. Infringements, rather than being spread throughout the user population, were concentrated higher up the chain of distribution of works. Pursuing the intermediary therefore offered the most effective way to enforce copyright interests.54

A further corollary to the large investment necessary to create such technologies is that their providers are likely to be easily identifiable and deep-pocketed, making them attractive defendants in the event they step out of line.

Distribution technologies are developed for profit

The next assumption is that distribution technologies are developed for profit. As Professor Jonathan Zittrain has observed, “[b]efore the advent of modems and networks, major physical-world infringers typically needed a business model because mass-scale copyright infringements required substantial investment in copying and distribution infrastructure.”55 Thus the assumption that developers of distribution technologies would do so for profit was inextricably tied to the large investments that were considered to be an integral part of developing and distributing such technologies in the first place: once that initial investment had been made, there was strong motivation to obtain some financial return.

This traditional need to make a massive investment and then to recoup those expenses has significant implications. As Paul Ganley has explained:

The normal phases of R&D, product design, manufacture, unit testing and distribution all help to constrain the wilder excesses of copyright infringing potential. The inherent checks and balances in the structure of legitimate businesses help to ensure that companies will shy away from such costly and time consuming exercises if they believe there is no legitimate avenue for them to recoup their substantial investment.56

This assumption was reflected in various theories of secondary liability for copyright infringement. It is most explicit in the vicarious liability doctrine, of which one element is a “direct financial interest” in the infringement.57 However, the imposition of contributory liability has also often appeared to have been inspired largely by the profit motives of the defendants.58

Once again, this assumption worked to keep the total number of providers relatively small. It also kept them in line. Few providers were inclined to skirt the edges of the law too closely, since litigation by aggrieved rights holders would cut dramatically into their anticipated profits.

Rational developers of distribution technologies won’t share their secrets with consumers or competitors

The final relevant assumption is that providers of distribution technologies won’t share the secrets of their inventions. This follows on closely from the assumption that distribution technologies are expensive to develop. Having spent that money to research, develop, manufacture and distribute a technology, the provider has no incentive to share that technology with its competitors. Again, this is one of the reasons why the gatekeeper-enforcement regime worked so well before software distribution technologies emerged. The disinclination to allow technologies to be copied further limited the number of technology providers, and enabled gatekeeper-based laws to effectively keep them under control.

But this is getting ahead of the story. These assumptions and the gaping mismatch between them and the realities of P2P software development will be revisited a little later. For now, a better starting point is right back when people first started to get really interested in making music available online.

Evolution of the revolution

The online music equation

Since digital computers were first invented, people have gone to incredible lengths to make them play and share music. When MIT hacker Peter Samson was given access to a $3 million US computer in the 1960s, along with virtually unlimited possibilities for its use, he homed in on its single audio speaker – a basic device lacking any controls for pitch, amplitude or tone – and convinced it to output music.59 As computing technology became more accessible, the demand for ways in which to play and share music online grew too. In 1993 a handful of college students founded the Internet Underground Music Archive (“IUMA”), which went on to become a pioneer of internet music distribution. Inspired by the failure of one of their number to sign his band to a major label (despite, or perhaps because of, such musical offerings as “Cold Turd on a Paper Plate”), the IUMA sought to make niche music available to a larger audience. The service offered online hosting of music and information on its website on behalf of unsigned bands in exchange for a small fee. Files were compressed using a technology known as MP2, which reduced files to manageable sizes by sacrificing sound quality. Although download times were long, the music obscure and the fidelity poor, IUMA rapidly gained popularity around the world. “Even when traffic was minimal, music clips were being downloaded from as far away as Russia – an appealing prospect to bands unaccustomed to being heard outside their hometowns.”60

At the same time that the IUMA was demonstrating the demand for online music distribution, an immense repertoire of unsecured digital music was being quietly built up, courtesy of the recording industry’s shift from vinyl and tape to compact disc. Industry executives had made a fateful decision not to incorporate digital rights management technology into the new format: the massive size of digital music files, prohibitive cost of CD-burning technology, and slowness and scarcity of internet connections had led insiders to conclude that widespread unauthorized copying would never be an issue. After all, the standard baud rate of dial-up internet in the early 1980s was around 2400 bits per second. Assuming the download remained constantly at this maximum speed and the connection never dropped out, it would still take a user around a month to download a standard 650MB CD. As well as being time consuming and unreliable, downloading that much music was likely to cost far more (through internet access fees) than simply purchasing the CD from a record shop.
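The back-of-the-envelope arithmetic behind that month-long estimate can be checked directly. The sketch below uses only the figures quoted above (a 650MB CD and a sustained 2400 bits per second), and assumes 8 bits per byte with no protocol overhead:

```python
# Rough check of the download-time estimate: a 650 MB CD over a
# dial-up line running constantly at 2400 bits per second.
CD_BYTES = 650 * 1024 * 1024       # 650 MB expressed in bytes
LINE_BPS = 2400                    # early-1980s modem speed, bits/second

seconds = CD_BYTES * 8 / LINE_BPS  # total bits divided by the line rate
days = seconds / (60 * 60 * 24)

print(f"{days:.1f} days")          # roughly a month, as the text suggests
```

The result comes out at a little over 26 days of continuous, error-free downloading, which is consistent with the industry insiders' conclusion that unauthorized copying posed no practical threat at the time.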

By the late 1990s, however, the equation had changed. Powerful desktop computers had become cheap enough for extensive business and home adoption. The development of the World Wide Web and search engines, plus faster and cheaper access plans, had made the internet more accessible than ever before. And the cost of data storage fell through the floor – from $16.25 US per megabyte in 1981, to $0.003 US per megabyte by 1996.61 The last remaining barrier to widespread unauthorized online distribution of top quality digital music files was their prohibitively large size. This was overcome in 1996, when the German Fraunhofer Institute for Integrated Circuits made widely available a technology it had developed during the previous decade.62 Called “ISO-MPEG Audio Layer-3”, or “MP3”, the technology enabled CD-quality sound files to be compressed by a factor of 12 with minimal compromise of the original sound quality.63 By that point, all of the elements necessary to make the online distribution of music a practical proposition were in place.

Joining the dots

By early 1997 a few dozen individuals, mostly college students, had begun to host websites offering a range of copyrighted popular music for free download to anyone who stumbled across them on the web. One of these pioneers was David Weekly, a Stanford student, who hosted his site courtesy of his school’s internet connection. Although his site hosted unauthorized versions of copyright-protected popular music, Weekly later explained that he was not motivated by commercial considerations. “None of us really had a patent interest in illegally copying music; we were simply blown away by the ‘cool factor’ of the new medium.”64 Demand for the free digital music on Weekly’s site was massive: within a week and a half, it had become responsible for 80 percent of his campus’ outgoing internet traffic.65 Shortly afterwards, Weekly received a call from his university’s network security branch. A record label had complained that he was distributing music in breach of copyright and, together with dozens of others, his site was shut down.66 But the online music movement kept gaining momentum. One newly launched website received 10 000 hits on its first day, even though it had not been advertised and did not yet host a single piece of music.67 And when a teenaged programmer from Arizona wrote a program to play MP3s, it was downloaded 15 million times in just 18 months68 before eventually being bought by AOL for a reported $100 million US.69

Despite the incredible level of demand that these stories demonstrate, the RIAA resisted the trend towards online music. It stonewalled early efforts to legitimately distribute popular music online,70 sought to prevent importation of an early portable MP3 player,71 and tried to lock music up by working with technology companies to develop ways to “protect the playing, storing, and distributing of digital music”.72 Most significant to the eventual development of P2P file sharing technology, however, it sought to shut down every website it could identify as hosting unauthorized MP3s.

As soon as MP3s started appearing on websites, the RIAA hired investigators to identify infringing sites. Then it issued ultimatums: remove the infringing content or face legal action for infringement. From 1998, such takedown demands became formalized under the Digital Millennium Copyright Act’s (“DMCA”) newly introduced safe harbor provisions, which are contingent on the “expeditious” removal of allegedly infringing content upon receipt of notice.73 Matt Oppenheim, then the senior vice president of business and legal affairs for the RIAA, estimated that, by June 2003, copyright owners had sent more than half a million DMCA “cease and desist” notices.74 Such takedown demands often had the desired effect, particularly where the sites were being unknowingly hosted by universities or corporations. These hosts, analogous to the traditional gatekeepers identified in the previous chapter, generally cooperated by quickly removing the offending content.75 This is precisely what had happened to David Weekly, the student whose Stanford-hosted MP3 site had been so enthusiastically embraced by the internet-using public. Where the RIAA’s notices were disregarded, the music industry was known to back up the threat with action. In 1997, for example, rights holders filed three lawsuits within a 24-hour period against unnamed defendants, alleging that there was infringing content on their websites. After preliminary injunctions were granted, the unauthorized music was quickly removed from those sites.76

Some individuals reacted to this campaign by developing ways of lessening the effect of the takedown strategy. Most of the work in creating a website lies in the original coding of its design and layout. Once that has been done, it is a simple matter to add or edit content, or relocate the entire site elsewhere. Taking advantage of these characteristics, many providers of infringing MP3s began distributing their offerings via a system of multiple sites. One site would list and provide hyperlinks to available songs and other content, but not itself host any infringing songs. The music itself would be hosted at a completely separate location, typically one of the many free quasi-anonymous web-hosting facilities that were being launched around the same time. Users who clicked the links could seamlessly save the relevant content regardless of where it was hosted. When the inevitable cease-and-desist letter reached the host of the content it would quickly be removed. However, its providers would then simply upload the MP3s at a new location (or find other copies that were already online), update their links to reflect the changed locations, and have the music available again almost immediately. Indeed, the entire enforcement process was likely to be considerably less expensive and time consuming for the distributors of the infringing content than for the RIAA itself.
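The division of labour just described – a stable links page decoupled from disposable hosting – can be sketched in miniature. The site names, URLs and function names below are entirely invented for illustration:

```python
# Sketch of the index/host separation used to blunt takedown notices.
# The links page survives; only the hosted copies come and go.

link_index = {
    "song_a.mp3": "http://freehost1.example/abc/song_a.mp3",
    "song_b.mp3": "http://freehost1.example/abc/song_b.mp3",
}

def takedown(url):
    """The host removes a file after receiving a cease-and-desist notice."""
    for name, location in link_index.items():
        if location == url:
            link_index[name] = None   # a dead link, until re-uploaded

def reupload(name, new_url):
    """The distributor uploads a fresh copy elsewhere and updates the links page."""
    link_index[name] = new_url

# The cheap part of the cycle falls on the distributor, not the rights holder:
takedown("http://freehost1.example/abc/song_a.mp3")
reupload("song_a.mp3", "http://freehost2.example/xyz/song_a.mp3")
```

The asymmetry the text describes is visible here: each takedown requires investigation and legal process, while each re-upload is a one-line change to the index.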

Nonetheless, the RIAA’s strategy enjoyed a significant amount of success. Because of the inevitable time lag between the infringing MP3 files being removed and the links pages being updated, attempts by internet users to download files were often met with “file not found” errors. This transformed the process of downloading music via the web into a time-consuming and frustrating experience. “[T]here were no easy, continuous, reliable sources for pirated music on the Net at large.”77 So numerous were the fruitless searches, Professor Stuart Biegel observes, that “many commentators predicted that the controversy was ending and that the RIAA had won.”78 At this stage, Zittrain argues, the music industry had “battled at least to a stalemate, if not better”.79

Changing the rules

But that situation soon changed dramatically. While the RIAA’s enforcement tactics probably frustrated some users into returning to traditional record stores, others persevered, following link after link in search of one that had not yet been disabled. One user happened to complain to his college roommate about this frustrating plague of dead links.80 That roommate, Shawn Fanning, reasoned that the shortcomings of existing online music distribution could be bypassed by developing an application that maintained a fluid index telling users what music was available at any given moment.81 It would be far less vulnerable to the notice-and-takedown regime because the content would be hosted by the individuals who wanted to share files, and would go on and offline as they did; and its real-time structure would make it impervious to the scourge of dead links. Fanning put these ideas into practice via a program called Napster, releasing the first beta version on 1 June 1999.82

How Napster worked

Napster users were required to nominate a folder on their computer in which to store downloaded music. Unless the user expressly opted out, the content stored in this folder would be scanned for MP3 files each time the user connected to the service, and information about those files would be added to a central index maintained on Napster’s servers. However, at no stage did the service itself copy the music files. Napster’s user interface allowed users to search for desired content using a number of search fields, including artist name and song title, file size, bit rate and other characteristics. A higher bit rate usually meant better-fidelity sound, and a correspondingly larger file.

When a user entered a search, the parameters would be transmitted to the Napster servers, which would compare them to the information contained in the index and return a list of results.83 Once a desired file was located, users could request a copy by double-clicking the file name or selecting it and clicking the “Get Selected Song(s)” button at the bottom of the screen. Upon receiving that request, Napster servers would query the host user to ascertain whether or not it was willing and able to send that file. If it was, Napster would communicate the IP address and other relevant details of the host user to the requesting user.84 At that point, Napster’s role in the transaction would be complete, and the actual transfer would take place directly over the internet between the hosting and requesting users.85 As soon as a user disconnected from the Napster service, the central index would be updated to reflect the change to the available content. This system of dynamic updating meant that Napster users, unlike those downloading music from the web, had no problems with broken or outdated links.

The Napster service relied on central servers to give users a fixed point on the internet to which to connect and to facilitate their searches. This meant that individual users could only connect to the network if Napster’s servers permitted them to do so, and had to communicate with those servers to obtain any information about files currently available on the network. However, once those servers provided information as to the location of desired files, Napster’s architecture enabled data to be transferred directly between individuals. This represented a revolution in the way in which data was transmitted online.

Before Napster popularized P2P communications architectures, most common internet transactions utilized a client-server model. In client-server relationships, the server controls the provision of both access and content. Clients have no input as to what information will be made available, or who will be able to reach it. Examples of client-server relationships include the World Wide Web (which is exclusively accessed through web servers) and email systems (which utilize servers to deliver incoming and outgoing mail).

The widespread adoption of client-server architectures had been largely driven by the practicalities of IP address allocation. Every computer on the internet has a unique internet protocol or “IP” address, which allows it to be distinguished from every other. In the early days of the internet, when relatively few machines were connected, virtually all internet-connected computers had static IP addresses, which remained the same each time they connected to the internet. The benefit of a static IP address is that the same resource can always be found at the same location. However, the number of available IP addresses is finite, and when more people began using the internet it became impracticable to allocate a dedicated address to every device. Thus a system of dynamic IP addresses developed, whereby internet service providers (“ISPs”) were assigned a pool of IP addresses that could be allocated to their users as needed. An internet user with a dynamic IP service is likely to have a different IP address, or internet location, each time they go online or reboot their modem. Because the IP addresses of individual users tend to change rapidly, causing users to blink in and out of the internet network at different points, they are referred to as existing at “the edges of the internet”.86

Such uncertain connectivity long limited the ability of many users to host, share and distribute information. But Napster changed all that. It exploited those underutilized resources at the edges of the network, taking advantage of the computing power and internet connections of its highly transient and unpredictable users, who effectively functioned as both clients and servers in that they both requested content and distributed it to others. The technical relationship between Napster and its users is graphically depicted in Figure 1.1.
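The centralized-index model described above can be sketched in miniature. This is an illustrative simplification under stated assumptions, not the actual Napster protocol; all class names, file names and peer addresses are invented:

```python
# Minimal sketch of a Napster-style central index: the server tracks who is
# sharing what, answers searches, and hands back peer addresses -- but the
# file transfers themselves happen directly between peers.
from collections import defaultdict

class CentralIndex:
    """Tracks which connected peer is currently sharing which files."""

    def __init__(self):
        self.files = defaultdict(set)   # file name -> set of peer addresses
        self.peers = defaultdict(set)   # peer address -> set of file names

    def connect(self, peer, shared_files):
        """On connection, the peer's shared folder is added to the index."""
        for name in shared_files:
            self.files[name].add(peer)
            self.peers[peer].add(name)

    def disconnect(self, peer):
        """Dynamic updating: a departing peer's files vanish from the index."""
        for name in self.peers.pop(peer, set()):
            self.files[name].discard(peer)

    def search(self, query):
        """Return addresses of peers sharing matching files; the requesting
        user then contacts those peers directly for the transfer."""
        return {name: sorted(peers)
                for name, peers in self.files.items()
                if query.lower() in name.lower() and peers}

idx = CentralIndex()
idx.connect("10.0.0.1:6699", ["song_a.mp3", "song_b.mp3"])
idx.connect("10.0.0.2:6699", ["song_a.mp3"])
idx.disconnect("10.0.0.1:6699")   # that peer's listings disappear instantly
print(idx.search("song_a"))       # only the still-connected peer is returned
```

The sketch shows both properties the text emphasizes: the index reflects the network in real time (no dead links), and the server is a single point of failure – remove `idx` and nothing can be found at all.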

This architecture allowed Napster Inc to maintain accurate real-time indexes of available files, to facilitate communications between users and to respond quickly to searches. It also scaled effectively: the additional demand created by increased numbers of users could be handled by simply adding more servers to the central array. However, the model also had some noteworthy drawbacks. For one thing, it was relatively expensive, since Napster Inc was obliged to purchase the server hardware required to power it – a cost that grew in line with the service’s popularity. Perhaps the most notable downside, however, was the network’s vulnerability: its central servers gave it a single point of failure, and if that switch were flipped the entire operation would disappear in an instant.

These disadvantages were of little concern to users, who were much more interested in the service’s ready availability of free, high-quality music files. Word of the new Mecca for infringement spread quickly. The more people connected, the more music became available on the network, and the more attractive and popular the service became. Indeed, its popularity and illicit use became such that Zittrain described it as “the open air drug market of copyright infringement”.87 Aghast at this sudden torrent of infringement, and unable to shut it down via the usual tactics, 18 members of the Recording Industry Association of America sued Napster Inc for copyright infringement in December 1999.

Figure 1.1 The Napster network topology