Intellectual Property: The Yin and Yang of Copyright and Copyleft
If you have been watching TV or reading the news lately, you may have come to the conclusion that the open-source movement (which promotes collaborative, free-to-use software) has made a breakthrough. “Open source” is everywhere. A few examples: The United Nations has announced it is using Linux, the main open-source operating system, to rebuild Afghanistan; the Dean and Clark campaigns have both embarked on open-source projects; and during the NFL playoffs I personally viewed the IBM Linux commercial 87,000 times. There is now even an open-source cola, with the recipe printed on the label, as well as an open-source song with lyrics set to the Bulgarian folk classic “Sadi Moma.”
Linux is now the fastest-growing operating system in the world, and it is projected to have 45 percent of the global market share by 2007. Considering its beginning as an obscure piece of source code released to an even more obscure group of developers on the Internet, this is in many ways an astonishing development, and one that is having a significant impact on the software industry. Even Microsoft, the standard-bearer of proprietary software, has been forced to respond: it has reduced prices in some markets, shared code with governments and corporate clients, and launched a major new ad campaign on the benefits of running Windows rather than Linux.
While the Microsoft-versus-Linux story has generated a lot of controversy, many information technology experts see the competition between open-source and proprietary software as a positive sign for the software industry, since increased competition promotes innovation and consumer choice. Both open-source and proprietary software have their pros and cons, and the market is big enough for both of them. Their coexistence encourages the development of new technologies, making it easier for more people to find the products that best fit their particular needs.
The debate pitting open-source against proprietary software, however, goes much deeper than this. For many, it is a key battle in the larger struggle between those touting the libertarian ideals on which the Internet was founded and those seeking to preserve traditional privileges and property rights in the digital age. The resolution of this debate will, according to partisans on both sides, have a tremendous impact not only on many concrete issues, such as intellectual property rights, business and licensing models, and government procurement, but also on the future of the Internet, global development, and, to a more limited extent, free society and democracy.
Yet how this debate is resolved may not be as important as what governments do in reaction to it. In response to the hyped-up rhetoric coming from both sides, governments are proposing at breakneck speed, and in some cases adopting, extreme measures either to reduce the procurement and use of proprietary technologies or to reinforce the legal standing of intellectual property-rights holders. It is these measures that, if implemented, have the greatest potential to limit the benefits of the Internet in the future, as governments’ efforts to “pick the winners” could have disturbing consequences. As the history of open source will illustrate, competition in this area has developed without the benefit of government intervention. Clearly, if open source can become such an important movement all on its own, it does not need government help to become a vital player in the open market.
i. free software & hacker utopia
Although the terms “free software” and “open source” were not coined until the 1980s and ’90s respectively, the movement really began with the advent of computing in the late ’50s among developers at the Artificial Intelligence Laboratory at the Massachusetts Institute of Technology. This community of developers, referred to as the “MIT hackers” (at the time, “hackers” simply described creators of computer systems, rather than people who break into systems), shared code and tools among themselves as part of their work on contracts for the Department of Defense’s Advanced Research Projects Agency (ARPA).
The first iteration of the Internet, which was called ARPAnet, emerged from this work, and it had a profound effect on the software community. “Its electronic highways brought together hackers all over the U.S. in a critical mass,” as one history of the movement puts it. This community of sharing lasted through the early ’70s, as developers worked on a myriad of important projects, including the source code for Unix, which became the first operating system for the Internet.
This open atmosphere began to shift in the late ’70s, as several incidents significantly changed the programming community and set the stage for the current debate between proprietary and open-source software. First, as software development became increasingly commercial (spurred on in part by the invention of the personal computer), companies began to protect their software with traditional proprietary licensing models. For example, as competition grew, AT&T, the Unix license holder, turned Unix into a commercial product for the first time. This meant the source code was no longer open to developers unless they signed a “non-disclosure” agreement.
Second, the Artificial Intelligence Lab at MIT discontinued use of the PDP-10 computer, a cornerstone of that community and its work. All the programs developed by the MIT hackers were written in a language particular to the PDP-10, and so the decades of shared code developed through the MIT lab were no longer useful. The newer computers all ran proprietary operating systems, making a cooperative community impossible.
This, in turn, prompted the third crucial event of the time. Richard Stallman, a young developer working at the MIT lab who had vowed never to sign a non-disclosure agreement, decided that rather than migrate to private industry, as many of his colleagues had done, he would resign from MIT and develop his own “free” operating system, which he named GNU (short for “GNU’s Not Unix”). This led to the creation of the free software movement, as well as to the development of the General Public License (GPL), otherwise known as “copyleft.”
The term “free software” is something of a misnomer, in that most people assume it means the software is free of charge. Actually, what is meant is that the software’s license gives the user the freedom to access, modify, or otherwise use its source code. Or, as Stallman often says, “Think of free speech, not free beer.” Software is considered “free” if it meets four criteria: the program can be run for any purpose; it can be changed to meet particular needs; it can be distributed to other developers, neighbors, and others for their use; and it can be published so others can help improve it. In addition, free software in Stallman’s sense is licensed under the GPL, or copyleft, which mandates that the work will always remain open and free, regardless of how it is modified in the future.
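To see how copyleft works in practice, consider a minimal sketch of a GPL-covered source file. The file name and program here are invented for illustration, and the license notice is abridged; the point is that the notice travels with the code, so anyone who modifies and redistributes the file must pass the same four freedoms along under the same terms.

```c
/*
 * hello.c -- an invented example of a program released under the GNU GPL.
 *
 * This program is free software; you can redistribute it and/or modify it
 * under the terms of the GNU General Public License as published by the
 * Free Software Foundation. (Notice abridged for illustration.)
 *
 * Copyleft in action: anyone who modifies and redistributes this file must
 * offer recipients the source code and the same four freedoms, under the
 * same license terms.
 */
#include <stdio.h>

int main(void)
{
    printf("This program's source stays free, however it is modified.\n");
    return 0;
}
```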
ii. linux is born
By 1991, Stallman and his fellow members of the Free Software Foundation (established in 1985) had developed almost everything needed for a free operating system except the program’s core code, known as the kernel. Enter Linus Torvalds and the Linux development model.
In 1991, Torvalds, then a student at the University of Helsinki, took a Unix-like operating system called Minix as his starting point and wrote his own kernel, which he called Linux. Rather than work alone to develop this into a full operating system, he decided to release it on the Internet. Throughout the ’90s, thousands of programmers from around the world worked to combine elements developed by the GNU project and others into the complete open-source operating system we now know as Linux (to give full credit to the system’s major contributors, its full name should be GNU/Linux).
iii. free software vs. open source
The term “open source” was created as part of a marketing campaign designed to bring the free software movement into the mainstream. On April Fool’s Day, 1998, Netscape released the code behind its Internet browser to the public, which suddenly brought the confrontation between proprietary software and “free software” into the public sphere. As Eric Raymond, one of the founders of the open-source movement, described it in his article “Keeping an Open Mind,” “The shot heard ’round the world in this quieter revolution was the source release of Netscape’s ‘Mozilla’ browser. . . . This brought to widespread press and public attention a face-off between two dramatically different and fundamentally opposed styles of software development.” However, Raymond and other members of the free-software movement quickly realized that the term “free software” would need to be changed. Not only did it mistakenly give the impression that what they were talking about was “free of charge” instead of “free to use,” it also, Raymond wrote, made “a lot of corporate types nervous.” The solution was to change the terminology, and they settled on “open source.”
The official definition of “open source” as set out by the Open Source Software Institute is “collaboratively developed software created by corporations, academic institutions and individuals.” This definition and the basic ideals of the open-source movement do not differ significantly from those of “free software”: Both are based on the idea that the core code should be public and open to anyone who wants to use or modify it. The differences between them are in some ways a matter of semantics, but also relate to the level of flexibility each camp shows in terms of licensing. Open source recognizes a broader set of licensing schemes. Additionally, some members have focused on developing proprietary “add-ons” to the open software – which was one of the reasons Stallman never came to support open source.
iv. proprietary vs. open source: faster, better, cheaper
As more and more developers from around the world contributed to Torvalds’s project, Linux steadily improved until it could compete with existing operating systems. In fact, as Raymond tells us in Open Sources: Voices from the Open Source Revolution, by the late ’90s Linux development had made the transition from something that hackers worked on for their own needs to something that major computer companies were touting for its breakthrough innovations.
Nonetheless, proprietary software continues to hold its own in many key areas, including the global personal desktop market, where it enjoys a 94 percent share. But it is exactly the fact that open-source and proprietary software have distinct benefits that makes them both so useful to innovation and growth.
Price
Linux is significantly cheaper to acquire than proprietary software. First, you can download a copy of Linux for free; nothing beats that price. Second, Linux-based commercial products usually cost about a tenth of what their Windows counterparts do. The average Linux product sells for $40 to $80 per copy and can be installed on multiple computers. Windows XP, by contrast, costs $400 to $500 per copy, and each copy can be loaded on only one computer at a time.
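To make the per-copy arithmetic concrete, here is a small illustrative calculation; the 50-desktop office and the midpoint prices are assumptions made up for the example, not figures from any vendor.

```c
#include <stdio.h>

int main(void)
{
    /* Hypothetical office of 50 desktops. Prices are midpoints of the
       ranges cited above ($400-$500 for Windows XP, $40-$80 for a
       commercial Linux product) and are illustrative only. */
    const int    desktops         = 50;
    const double windows_per_copy = 450.0; /* licensed per machine          */
    const double linux_per_copy   = 60.0;  /* one copy covers many machines */

    printf("Windows licensing for the office: $%.0f\n",
           desktops * windows_per_copy);   /* $22,500 */
    printf("Linux licensing for the office:  $%.0f\n",
           linux_per_copy);                /* $60     */
    return 0;
}
```

Under these assumptions the gap is far larger than the per-copy ratio suggests, because the proprietary license is paid once per machine while a single Linux copy can be installed across the office.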
In response, Microsoft has begun experimenting with price cuts in certain markets. In Thailand, for example, in order to participate in the government’s program to sell affordable computers to its citizens, Microsoft reduced the price of Windows and Office to about $37 apiece.
Security
There is much controversy over whether open-source or proprietary products are more secure. On the one hand, hackers (in the modern sense) may have an easier time exploiting open-source software because the code is open, making it easier to find holes. Also, the open-source practice of releasing early and often means that initial versions of open-source software are likely to have more holes than proprietary software that has been extensively tested before it is sold to the public. In October 2003, Microsoft CEO Steve Ballmer argued that this had proven true in practice: “In the first 150 days of Windows 2000, we had 17 critical vulnerabilities. The first day of Windows 2003, we had four critical vulnerabilities.” “The first 150 days of Red Hat 6,” he continued, referring to a commercial version of Linux, revealed levels of vulnerabilities “five to ten times higher.”
A further downside to open source lies in the security area: if there is a breach, there may be no vendor to turn to for a solution. Breaches may therefore end up being more expensive, because users have to provide their own support.
Open-source proponents, however, argue that when there is a problem, Linux developers are much more likely to solve it before it is exploited, since programmers are themselves allowed to see the source code and make changes to patch up holes. Which is nice to know, if you happen to be a programmer or have contacts in the programming community.
Adaptability, Interoperability, and Support
One of the greatest benefits of open source is that the availability of the source code makes it easier to develop interoperable programs, and it also allows users to choose independently whom they hire for support services. With proprietary software, users are typically restricted to contracting support from the original supplier. Additionally, because their own programmers cannot see the code, they are unable to add security measures or adapt systems to the organization’s particular needs. In the current environment, where security concerns are growing and organizations rely more and more on computer networks and systems, this kind of flexibility has become highly desirable.
In recognition of such customer needs, Microsoft has in recent months agreed to share the source code to Windows with government and some corporate customers. Thus far, however, organizations with fewer than 1,500 Windows desktops have been left out of this offer.
On the other side, companies that produce proprietary software are often able to provide better and cheaper support because of their familiarity with the product. In almost all cases, these companies have solid track records in support, and they typically seek to improve customer relations, particularly through better security and privacy. Finding outside support with the same experience and incentives could be difficult, especially for small and medium-sized businesses.
Innovation
Perhaps the biggest debate between the proprietary and open-source communities is over which model best promotes innovation. Open-source products are said to encourage innovation because developers can use the code to create products that are more efficient and stable. But that does not mean they make any money from it. As Java creator James Gosling put it in Red Herring magazine, “How can you be a professional engineer and still make your mortgage payments?” And a lack of financial incentives might just mean there is less motivation to go out and find the next big thing.
Many argue these financial incentives make innovation through proprietary software companies more likely, since in this environment, innovation equals financial gain. But, as the Business Software Alliance’s “Principles for Software Innovation” lay out, the best environment for innovation is probably the one in which multiple software development business and licensing models will be able to compete on their merits. Each model will appeal to different customers, and the existence of both will form a “healthy, diverse software marketplace.”
v. the future is balance
The IBM Linux commercial says that “sharing data is the first step towards community.” For many people, especially those who work on open-source software, this debate is not just about which model will best serve the end user. It is the central battle in an ideological war. On one side are those who see the Internet as a great cooperative forum where people should be free to do pretty much what they want. On the other side are those who see the Internet both as a new arena in which to make money and as a challenge to the way they have traditionally done business, and who therefore seek to exert control over it, primarily in the name of intellectual property rights.
Of course, in wars there has to be a winner. One side prevails. But in this case, the sides can clearly coexist, working against each other, but also making each other better. Both the open-source and the proprietary software models have real benefits to bring to the table, and because no one solution can do everything for all customers, an environment where each acts as a check on the other is undoubtedly going to produce the best results in terms of innovation and growth.
Unfortunately, some governments are seeking to intervene in a potentially discriminatory manner by preventing procurement of information technology products that are not open source. It is easy to understand why – particularly in developing countries – governments would feel motivated to protect themselves and facilitate their own software industries by passing laws that they see as protections against monopoly technologies. But this kind of intervention is not the only tool they have at their disposal. They can rely on other business-friendly mechanisms to encourage competition in order to accomplish the same goals. There are dangerous potential side effects to governments picking the winner, including the simple yet important possibility that such decisions could lead to systems that are less effective and less secure.
The balance that has been achieved between these competing models of software development could be extended to the larger intellectual property debate. Protecting intellectual property rights is important. Without some control over what they produce, artists, writers, and software developers would not be as motivated to create art, write books, or develop software. Even the proponents of open source value their ability to license their works and set the rules they feel are appropriate for what they have created.
But some content providers, including some proprietary software license holders, are seeking to convince governments around the world to implement tools that would have serious implications for the future of the Internet. Proposals that would make copyright protection systems mandatory in all digital devices have the potential to limit the benefits that can be derived from the Internet, in part by making it so difficult for ordinary users to enjoy content online that they give it up altogether. As Michael Powell, chairman of the U.S. Federal Communications Commission, said in late 2001, intellectual property policy may already be having this effect on the use of broadband technologies in the United States.
A balance between free ideals and controls and protections can be achieved, but it is much more likely to be achieved by using open source as a model than through government intervention. As Lawrence Lessig, the founder of Creative Commons – an organization seeking to restore balance in intellectual property rights – argues, the only way to accomplish a system of intellectual property rights in the digital environment is by “limit[ing] the government’s role in choosing the future of creativity.”
The open-source movement (helped along by its proprietary competitors), independent of government intervention, has produced a system that is beginning to achieve a balance between open and proprietary models. A similar equilibrium can be achieved in the larger realm of intellectual property. If the Internet is to be a positive force for business and technological innovation, the real solution to these problems is for governments to play a minimal role in this debate, and to let the marketplace largely determine the framework of rules for the Internet in the years to come.
Arrow Augerot lives in Washington, D.C.