Studies / Essays
Halloween I: Microsoft's Internal Memo On Open Source Software
In 1998, open source software was starting to gain significant traction, and Microsoft was scared. An internal memo was sent in August 1998 to key people at Microsoft, educating them about open source software and the potential threats it posed to the company's business model.
The memo eventually leaked and was made public. Microsoft confirmed that the memo is authentic.
The memo (dubbed the 'Halloween Document') is a fascinating look into Microsoft's early thoughts on open source software and how the company decided to define their strategy and focus.
We've republished the Halloween Document below. Enjoy.
Vinod Valloppillil (VinodV)
Aug 11, 1998 -- v1.00
Open Source Software
A (New?) Development Methodology
Open Source Software (OSS) is a development process which promotes rapid creation and deployment of incremental features and bug fixes in an existing code / knowledge base. In recent years, corresponding to the growth of the Internet, OSS projects have acquired the depth & complexity traditionally associated with commercial projects such as Operating Systems and mission-critical servers.
Consequently, OSS poses a direct, short-term revenue and platform threat to Microsoft -- particularly in the server space. Additionally, the intrinsic parallelism and free idea exchange in OSS have benefits that are not replicable with our current licensing model and therefore present a long-term developer mindshare threat.
However, other OSS process weaknesses provide an avenue for Microsoft to garner advantage in key feature areas such as architectural improvements (e.g. storage+), integration (e.g. schemas), ease-of-use, and organizational support.
Open Source Software
What is it?
Open Source Software (OSS) is software in which both source and binaries are distributed or accessible for a given product, usually for free. OSS is often mistaken for "shareware" or "freeware" but there are significant differences between these licensing models and the process around each product.
Software Licensing Taxonomy
|License||Zero Price Avenue||Redistributable||Unlimited Usage||Source Code Available||Source Code Modifiable||Public "Check-ins" to Core Codebase||All Derivatives Must Be Free|
|Commercial|| || || || || || || |
|Trial Software||X|| || || || || || |
|Shareware||X*||X|| || || || || |
|Non-commercial Use||X*||X|| || || || || |
|Royalty-free Binaries ("Freeware")||X||X||X|| || || || |
|Royalty-free Libraries||X||X||X||X|| || || |
|Open Source (BSD-Style)||X||X||X||X||X|| || |
|Open Source (Apache Style)||X||X||X||X||X||X|| |
|Open Source (Linux/GNU Style)||X||X||X||X||X||X||X|
(X* -- zero price by way of unenforced licensing)
The broad categories of licensing include:
- Commercial software
Commercial software is classic Microsoft bread-and-butter. It must be purchased, may NOT be redistributed, and is typically only available as binaries to end users.
- Limited trial software
Limited trial software is usually a functionally limited version of commercial software which is freely distributed and intended to drive purchase of the commercial code. Examples include 60-day time-bombed evaluation products.
- Shareware
Shareware products are fully functional and freely redistributable but have a license that mandates eventual purchase by both individuals and corporations. Many Internet utilities (like "WinZip") take advantage of shareware as a distribution method.
- Non-commercial use
Non-commercial use software is freely available and redistributable by non-profit making entities. Corporations, etc. must purchase the product. An example of this would be Netscape Navigator.
- Royalty free binaries
Royalty-free binaries consist of software which may be freely used and distributed in binary form only. Internet Explorer and NetMeeting binaries fit this model.
- Royalty free libraries
Royalty-free libraries are software products whose binaries and source code are freely used and distributed but may NOT be modified by the end customer without violating the license. Examples of this include class libraries, header files, etc.
- Open Source (BSD-style)
A small, closed team of developers develops BSD-style open source products & allows free use and redistribution of binaries and code. While users are allowed to modify the code, the development team does NOT typically take "check-ins" from the public.
- Open Source (Apache-style)
Apache takes the BSD-style open source model and extends it by allowing check-ins to the core codebase by external parties.
- Open Source (CopyLeft, Linux-style)
CopyLeft or GPL (General Public License) based software takes the Open Source license one critical step further. Whereas BSD- and Apache-style software permits users to "fork" the codebase and apply their own license terms to their modified code (e.g. make it commercial), the GPL license requires that all derivative works must in turn also be GPL code. "You are free to hack this code as long as your derivative is also hackable."
Open Source Software is Significant to Microsoft
This paper focuses on Open Source Software (OSS). OSS is acutely different from the other forms of licensing (in particular "shareware") in two very important respects:
- There always exists an avenue for completely royalty-free purchase of the core code base
- Unlike freely distributed binaries, Open Source encourages a process around a core code base and encourages extensions to the codebase by other developers.
OSS is a concern to Microsoft for several reasons:
- OSS projects have achieved "commercial quality"
A key barrier to entry for OSS in many customer environments has been its perceived lack of quality. OSS advocates contend that the greater code inspection & debugging in OSS software results in higher quality code than commercial software.
Recent case studies (the Internet) provide very dramatic evidence in customers' eyes that commercial quality can be achieved / exceeded by OSS projects. At this time, however, there is no strong evidence of OSS code quality aside from anecdotal accounts.
- OSS projects have become large-scale & complex
Another barrier to entry that has been tackled by OSS is project complexity. OSS teams are undertaking projects whose size & complexity had heretofore been the exclusive domain of commercial, economically-organized/motivated development teams. Examples include the Linux Operating System and Xfree86 GUI.
OSS process vitality is directly tied to the Internet, which provides distributed development resources on a mammoth scale. Some examples of OSS project size:
|Project||Lines of Code|
|Linux Kernel (x86 only)||500,000|
|Apache Web Server||80,000|
|Xfree86 X-windows server||1.5 Million|
|"K" desktop environment||90,000|
|Full Linux distribution||~10 Million|
- OSS has a unique development process with unique strengths/weaknesses
The OSS process is unique in its participants' motivations and the resources that can be brought to bear on problems. OSS, therefore, has some interesting, non-replicable assets which should be thoroughly understood.
Open source software has roots in the hobbyist and the scientific community and was typified by ad hoc exchange of source code by developers/users.
The largest case study of OSS is the Internet. Most of the earliest code on the Internet was, and still is, based on OSS, as described in an interview with Tim O'Reilly ( http://www.techweb.com/internet/profile/toreilly/interview ):
TIM O'REILLY: The biggest message that we started out with was, "open source software works." ... BIND has absolutely dominant market share as the single most mission-critical piece of software on the Internet. Apache is the dominant Web server. SendMail runs probably eighty percent of the mail servers and probably touches every single piece of e-mail on the Internet
Free Software Foundation / GNU Project
Credit for the first instance of modern, organized OSS is generally given to Richard Stallman of MIT. In late 1983, Stallman created the Free Software Foundation (FSF) -- http://www.gnu.ai.mit.edu/fsf/fsf.html -- with the goal of creating a free version of the UNIX operating system. The FSF released a series of sources and binaries under the GNU moniker (which recursively stands for "Gnu's Not Unix").
The original FSF / GNU initiatives fell short of their original goal of creating a completely OSS Unix. They did, however, contribute several famous and widely disseminated applications and programming tools used today including:
- GNU Emacs - originally a powerful character-mode text editor, over time Emacs was enhanced to provide a front-end to compilers, mail readers, etc.
- GNU C Compiler (GCC) -- GCC is the most widely used compiler in academia & the OSS world. In addition to the compiler, a fairly standardized set of intermediate libraries is available as a superset of the ANSI C libraries.
- GNU GhostScript -- Postscript printer/viewer.
FSF/GNU software introduced the "copyleft" licensing scheme that not only made it illegal to hide source code from GNU software but also made it illegal to hide the source from work derived from GNU software. The document that described this license is known as the General Public License (GPL).
Wired magazine has the following summary of this scheme & its intent ( http://www.wired.com/wired/5.08/linux.html):
The general public license, or GPL, allows users to sell, copy, and change copylefted programs - which can also be copyrighted - but you must pass along the same freedom to sell or copy your modifications and change them further. You must also make the source code of your modifications freely available.
The second clause -- open source code of derivative works -- has been the most controversial (and, potentially the most successful) aspect of CopyLeft licensing.
Open Source Process
Commercial software development processes are hallmarked by organization around economic goals. However, since money is often not the (primary) motivation behind Open Source Software, understanding the nature of the threat posed requires a deep understanding of the process and motivation of Open Source development teams.
In other words, to understand how to compete against OSS, we must target a process rather than a company.
Open Source Development Teams
Some of the key attributes of Internet-driven OSS teams:
- Geographically far-flung. Some of the key developers of Linux, for example, are uniformly distributed across Europe, the US, and Asia.
- Large set of contributors with a smaller set of core individuals. Linux, once again, has had over 1000 people submit patches, bug fixes, etc. and has had over 200 individuals directly contribute code to the kernel.
- Not monetarily motivated (in the short run). These individuals are more like hobbyists spending their free time / energy on OSS project development while maintaining other full time jobs. This has begun to change somewhat as commercial versions of the Linux OS have appeared.
OSS Development Coordination
Communication -- Internet Scale
Coordination of an OSS team is extremely dependent on Internet-native forms of collaboration. Typical methods employed run the full gamut of the Internet's collaborative technologies:
- Email lists
- 24x7 monitoring by international subscribers
- Web sites
OSS projects the size of Linux and Apache are only viable if a large enough community of highly skilled developers can be amassed to attack a problem. Consequently, there is a direct correlation between the size of the project that OSS can tackle and the growth of the Internet.
In addition to the communications medium, another set of factors implicitly coordinate the direction of the team.
Common goals are the equivalent of vision statements which permeate the distributed decision making for the entire development team. A single, clear directive (e.g. "recreate UNIX") is far more efficiently communicated and acted upon by a group than multiple, intangible ones (e.g. "make a good operating system").
Precedence is potentially the most important factor in explaining the rapid and cohesive growth of massive OSS projects such as the Linux Operating System. Because the entire Linux community has years of shared experience dealing with many other forms of UNIX, they are easily able to discern -- in a non-confrontational manner -- what worked and what didn't.
There weren't arguments about the command syntax to use in the text editor -- everyone already used "vi" and the developers simply parcelled out chunks of the command namespace to develop.
Having historical, 20/20 hindsight provides a strong, implicit structure. In more forward-looking organizations, this structure is provided by strong, visionary leadership.
NatBro points out the need for a commonly accepted skillset as a pre-requisite for OSS development. This point is closely related to the common-precedents phenomenon. From his email:
A key attribute ... is the common UNIX/gnu/make skillset that OSS taps into and reinforces. I think the whole process wouldn't work if the barrier to entry were much higher than it is ... a modestly skilled UNIX programmer can grow into doing great things with Linux and many OSS products. Put another way -- it's not too hard for a developer in the OSS space to scratch their itch, because things build very similarly to one another, debug similarly, etc.
Whereas precedents identify the end goal, the common skillsets attribute describes the number of people who are versed in the process necessary to reach that end.
The Cathedral and the Bazaar
A very influential paper by an open source software advocate -- Eric Raymond -- was first published in May 1997 ( http://www.redhat.com/redhat/cathedral-bazaar/). Raymond's paper was expressly cited by (then) Netscape CTO Eric Hahn as a motivation for their decision to release browser source code.
Raymond dissected his OSS project in order to derive rules-of-thumb which could be exploited by other OSS projects in the future. Some of Raymond's rules include:
Every good work of software starts by scratching a developer's personal itch
This summarizes one of the core motivations of developers in the OSS process -- solving an immediate problem at hand faced by an individual developer -- this has allowed OSS to evolve complex projects without constant feedback from a marketing / support organization.
Good programmers know what to write. Great ones know what to rewrite (and reuse).
Raymond posits that developers are more likely to reuse code in a rigorous open source process than in a more traditional development environment because they are always guaranteed access to the entire source all the time.
Widely available open source reduces search costs for finding a particular code snippet.
``Plan to throw one away; you will, anyhow.''
Quoting Fred Brooks, ``The Mythical Man-Month'', Chapter 11. Because development teams in OSS are often extremely far flung, many major subcomponents in Linux had several initial prototypes followed by the selection and refinement of a single design by Linus.
Treating your users as co-developers is your least-hassle route to rapid code improvement and effective debugging.
Raymond advocates strong documentation and significant developer support for OSS projects in order to maximize their benefits.
Code documentation is cited as an area which commercial developers typically neglect which would be a fatal mistake in OSS.
Release early. Release often. And listen to your customers.
OSS advocates will note, however, that their release-feedback cycle is potentially an order of magnitude faster than commercial software's.
Given a large enough beta-tester and co-developer base, almost every problem will be characterized quickly and the fix obvious to someone.
This is probably the heart of Raymond's insight into the OSS process. He paraphrased this rule as "debugging is parallelizable". More in depth analysis follows.
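Raymond's rule can be put in rough quantitative terms. If each tester independently has some small chance p of characterizing a given bug, the probability that at least one of N testers does so is 1 - (1 - p)^N, which climbs toward certainty quickly as N grows. A minimal sketch (the per-tester probability and community sizes below are illustrative assumptions, not figures from Raymond):

```python
# Illustrative model of "given a large enough beta-tester base,
# almost every problem will be characterized quickly".
# p is the chance that any single tester hits and recognizes a given bug.
def p_bug_characterized(p: float, n_testers: int) -> float:
    """Probability that at least one of n independent testers finds the bug."""
    return 1.0 - (1.0 - p) ** n_testers

# With a 1% per-tester chance, a small team rarely catches the bug,
# but a thousand-strong community almost certainly does.
print(p_bug_characterized(0.01, 10))
print(p_bug_characterized(0.01, 1000))
```

Under these assumptions, ten testers catch the bug less than 10% of the time, while a community of a thousand catches it with near certainty.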
Once a component framework has been established (e.g. key API's & structures defined), OSS projects such as Linux utilize multiple small teams of individuals independently solving particular problems.
Because the developers are typically hobbyists, the ability to `fund' multiple, competing efforts is not an issue and the OSS process benefits from the ability to pick the best potential implementation out of the many produced.
Note, that this is very dependent on:
- A large group of individuals willing to submit code
- A strong, implicit componentization framework (which, in the case of Linux was inherited from UNIX architecture).
The core argument advanced by Eric Raymond is that unlike other aspects of software development, code debugging is an activity whose efficiency improves nearly linearly with the number of individuals tasked with the project. There are few or no management or coordination costs associated with debugging a piece of open source code -- this is the key `break' in Brooks' Law for OSS.
Raymond includes Linus Torvald's description of the Linux debugging process:
My original formulation was that every problem ``will be transparent to somebody''. Linus demurred that the person who understands and fixes the problem is not necessarily or even usually the person who first characterizes it. ``Somebody finds the problem,'' he says, ``and somebody else understands it. And I'll go on record as saying that finding it is the bigger challenge.'' But the point is that both things tend to happen quickly.
``Debugging is parallelizable''. Jeff [Dutky <email@example.com>] observes that although debugging requires debuggers to communicate with some coordinating developer, it doesn't require significant coordination between debuggers. Thus it doesn't fall prey to the same quadratic complexity and management costs that make adding developers problematic.
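The asymmetry Dutky describes can be made concrete by counting communication channels: n developers who must all coordinate pairwise create n(n-1)/2 channels (the quadratic cost behind Brooks' Law), whereas n debuggers who each talk only to a coordinating developer create just n. A small sketch (the team sizes are illustrative):

```python
# Quadratic cost of a tightly coordinated development team:
# every pair of developers is a potential communication channel.
def coordination_channels(n_developers: int) -> int:
    return n_developers * (n_developers - 1) // 2

# Linear cost of parallel debugging: each debugger communicates
# only with the coordinating developer, per Raymond/Dutky.
def debugging_channels(n_debuggers: int) -> int:
    return n_debuggers

for n in (10, 100, 1000):
    print(n, coordination_channels(n), debugging_channels(n))
```

At a thousand participants the coordinated team carries nearly half a million channels, while the debugging pool carries a thousand -- which is why adding debuggers scales where adding developers does not.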
One advantage of parallel debugging is that bugs and their fixes are found / propagated much faster than in traditional processes. For example, when the TearDrop IP attack was first posted to the web, less than 24 hours passed before the Linux community had a working fix available for download.
An extension to parallel debugging that I'll add to Raymond's hypothesis is "impulsive debugging". In the case of the Linux OS, implicit to the act of installing the OS is the act of installing the debugging/development environment. Consequently, it's highly likely that if a particular user/developer comes across a bug in another individual's component -- and especially if that bug is "shallow" -- that user can very quickly patch the code and, via internet collaboration technologies, propagate that patch very quickly back to the code maintainer.
Put another way, OSS processes have a very low entry barrier to the debugging process due to the common development/debugging methodology derived from the GNU tools.
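The low-friction patch loop described above runs on the unified-diff format standardized by the GNU tools: the user diffs a local fix against the shipped source and sends the result to the code maintainer. A minimal sketch using Python's standard difflib (the file name and the TearDrop-style fix shown are hypothetical, for illustration only):

```python
import difflib

# Hypothetical buggy source as shipped, and the user's local fix.
original = ["def teardrop_check(fragment):\n",
            "    return fragment.offset > 0\n"]
patched  = ["def teardrop_check(fragment):\n",
            "    # reject overlapping fragments (fix for TearDrop-style attack)\n",
            "    return fragment.offset >= 0 and not fragment.overlaps_previous\n"]

# Produce a unified diff -- the same format `diff -u` emits and a
# maintainer can apply with `patch` before propagating the fix.
patch = "".join(difflib.unified_diff(original, patched,
                                     fromfile="a/ip_fragment.py",
                                     tofile="b/ip_fragment.py"))
print(patch)
```

The resulting text is small, human-readable, and trivially mailed to a list, which is what keeps the entry barrier to contributing a fix so low.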
Any large-scale development process will encounter conflicts which must be resolved. Often, resolution is an arbitrary decision made in order to move the project forward. In commercial teams, the corporate hierarchy and performance-review structure solve this problem. How do OSS teams resolve such conflicts?
In the case of Linux, Linus Torvalds is the undisputed `leader' of the project. He's delegated large components (e.g. networking, device drivers, etc.) to several of his trusted "lieutenants" who further de-facto delegate to a handful of "area" owners (e.g. LAN drivers).
Other organizations are described by Eric Raymond: (http://earthspace.net/~esr/writings/homesteading/homesteading-15.html):
Some very large projects discard the `benevolent dictator' model entirely. One way to do this is turn the co-developers into a voting committee (as with Apache). Another is rotating dictatorship, in which control is occasionally passed from one member to another within a circle of senior co-developers (the Perl developers organize themselves this way).
This section provides an overview of some of the key reasons OSS developers seek to contribute to OSS projects.
Solving the Problem at Hand
This is basically a rephrasing of Raymond's first rule of thumb -- "Every good work of software starts by scratching a developer's personal itch".
Many OSS projects -- such as Apache -- started as a small team of developers setting out to solve an immediate problem at hand. Subsequent improvements of the code often stem from individuals applying the code to their own scenarios (e.g. discovering that there is no device driver for a particular NIC, etc.)
The Linux kernel grew out of an educational project at the University of Helsinki. Similarly, many of the components of Linux / GNU system (X windows GUI, shell utilities, clustering, networking, etc.) were extended by individuals at educational institutions.
- In the Far East, for example, Linux is reportedly growing faster than internet connectivity -- due primarily to educational adoption.
- Universities are some of the original proponents of OSS as a teaching tool.
- Research/teaching projects on top of Linux are easily `disseminated' due to the wide availability of Linux source. In particular, this often means that new research ideas are first implemented and available on Linux before they are available / incorporated into other platforms.
Ego Gratification
The most ethereal, and perhaps most profound, motivation presented by the OSS development community is pure ego gratification.
In "The Cathedral and the Bazaar", Eric S. Raymond cites:
The ``utility function'' Linux hackers are maximizing is not classically economic, but is the intangible of their own ego satisfaction and reputation among other hackers.
And, of course, "you aren't a hacker until someone else calls you hacker"
Homesteading on the Noosphere
A second paper published by Raymond -- "Homesteading on the Noosphere" ( http://sagan.earthspace.net/~esr/writings/homesteading/ ), discusses the difference between economically motivated exchange (e.g. commercial software development for money) and "gift exchange" (e.g. OSS for glory).
"Homesteading" is acquiring property by being the first to `discover' it or by being the most recent to make a significant contribution to it. The "Noosphere" is loosely defined as the "space of all work". Therefore, Raymond posits, the OSS hacker motivation is to lay a claim to the largest area in the body of work. In other words, take credit for the biggest piece of the prize.
From "Homesteading on the Noosphere":
Abundance makes command relationships difficult to sustain and exchange relationships an almost pointless game. In gift cultures, social status is determined not by what you control but by what you give away.
Examined in this way, it is quite clear that the society of open-source hackers is in fact a gift culture. Within it, there is no serious shortage of the `survival necessities' -- disk space, network bandwidth, computing power. Software is freely shared. This abundance creates a situation in which the only available measure of competitive success is reputation among one's peers.
More succinctly ( http://www.techweb.com/internet/profile/eraymond/interview ):
SIMS: So the scarcity that you looked for was the scarcity of attention and reward?
RAYMOND: That's exactly correct.
Altruism
Altruism is a controversial motivation, and I'm inclined to believe that at some level, altruism `degenerates' into a form of the ego-gratification argument advanced by Raymond.
One smaller motivation which, in part, stems from altruism is Microsoft-bashing.
A key threat in any large development team -- and one that is particularly exacerbated by the process chaos of an internet-scale development team -- is the risk of code-forking.
Code forking occurs when over normal push-and-pull of a development project, multiple, inconsistent versions of the project's code base evolve.
In the commercial world, for example, the strong, singular management of the Windows NT codebase is considered to be one of its greatest advantages over the `forked' codebase found in commercial UNIX implementations (SCO, Solaris, IRIX, HP-UX, etc.).
Forking in OSS -- BSD Unix
Within OSS space, BSD Unix is the best example of forked code. The original BSD UNIX was an attempt by U-Cal Berkeley to create a royalty-free version of the UNIX operating system for teaching purposes. However, Berkeley put severe restrictions on non-academic uses of the codebase.
In order to create a fully free version of BSD UNIX, an ad hoc (but closed) team of developers created FreeBSD. Other developers at odds with the FreeBSD team for one reason or another splintered the OS to create other variations (OpenBSD, NetBSD, BSDI).
There are two dominant factors which led to the forking of the BSD tree:
- Not everyone can contribute to the BSD codebase. This limits the size of the effective "Noosphere" and creates the potential for someone else to credibly claim that their forked code will become more dominant than the core BSD code.
- Unlike GPL, BSD's license places no restrictions on derivative code. Therefore, if you think your modifications are cool enough, you are free to fork the code, charge money for it, change its name, etc.
Both of these motivations create a situation where developers may try to force a fork in the code and collect royalties (monetary, or ego) at the expense of the collective BSD society.
(Lack of) Forking in Linux
In contrast to the BSD example, the Linux kernel code base hasn't forked. Some of the reasons why the integrity of the Linux codebase has been maintained include:
- Universally accepted leadership
Linus Torvalds is a celebrity in the Linux world and his decisions are considered final. By contrast, a similar celebrity leader did NOT exist for the BSD-derived efforts.
Linus is considered by the development team to be a fair, well-reasoned code manager and his reputation within the Linux community is quite strong. However, Linus doesn't get involved in every decision. Often, subgroups resolve their -- often large -- differences amongst themselves and prevent code forking.
- Open membership & long term contribution potential.
In contrast to BSD's closed membership, anyone can contribute to Linux and your "status" -- and therefore ability to `homestead' a bigger piece of Linux -- is based on the size of your previous contributions.
Indirectly this presents a further disincentive to code forking. There is almost no credible mechanism by which the forked, minority code base will be able to maintain the rate of innovation of the primary Linux codebase.
- GPL licensing eliminates economic motivations for code forking
Because derivatives of Linux MUST be available through some free avenue, it lowers the long term economic gain for a minority party with a forked Linux tree.
- Forking the codebase also forks the "Noosphere"
Ego motivations push OSS developers to plant the biggest stake in the biggest Noosphere. Forking the code base inevitably shrinks the space of accomplishment for any subsequent developers to the new code tree.
Open Source Strengths
What are the core strengths of OSS products that Microsoft needs to be concerned with?
OSS Exponential Attributes
Like our Operating System business, OSS ecosystems have several exponential attributes:
- OSS processes are growing with the Internet
The single biggest constraint faced by any OSS project is finding enough developers interested in contributing their time towards the project. As an enabler, the Internet was absolutely necessary to bring together enough people for an Operating System scale project. More importantly, the growth engine for these projects is the growth in the Internet's reach. Improvements in collaboration technologies directly lubricate the OSS engine.
Put another way, the growth of the Internet will make existing OSS projects bigger and will make OSS projects in "smaller" software categories become viable.
- OSS processes are "winner-take-all"
Like commercial software, the most viable single OSS project in many categories will, in the long run, kill competitive OSS projects and `acquire' their IQ assets. For example, Linux is killing BSD Unix and has absorbed most of its core ideas (as well as ideas in the commercial UNIXes). This feature confers huge first-mover advantages on a particular project.
- Developers seek to contribute to the largest OSS platform
The larger the OSS project, the greater the prestige associated with contributing a large, high-quality component to its Noosphere. This phenomenon feeds back into the "winner-take-all" nature of the OSS process in a given segment.
- Larger OSS projects solve more "problems at hand"
The larger the project, the more development, testing, and debugging the code receives -- and the more debugging it receives, the more people deploy it.
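The feedback loop running through these attributes -- the biggest project attracts the most contributors, which makes it bigger still -- is a rich-get-richer dynamic. A toy simulation (the project sizes and contributor counts are invented for illustration) in which each new contributor joins a project with probability proportional to its current size:

```python
import random

def simulate(sizes, n_new_contributors, seed=0):
    """Each new contributor joins a project with probability
    proportional to its current size (rich-get-richer dynamic)."""
    rng = random.Random(seed)
    sizes = list(sizes)
    for _ in range(n_new_contributors):
        total = sum(sizes)
        pick = rng.uniform(0, total)
        acc = 0.0
        for i, s in enumerate(sizes):
            acc += s
            if pick <= acc:
                sizes[i] += 1
                break
        else:
            # floating-point edge case: credit the last project
            sizes[-1] += 1
    return sizes

# Three hypothetical competing projects; the early leader tends
# to absorb most of the new mindshare as contributors arrive.
print(simulate([50, 30, 20], 1000))
```

This is only a sketch of the dynamic, but it illustrates why an early size lead in an OSS category is self-reinforcing.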
Binaries may die but source code lives forever
One of the most interesting implications of viable OSS ecosystems is long-term credibility.
Long-Term Credibility Defined
Long term credibility exists if there is no way you can be driven out of business in the near term. This forces change in how competitors deal with you.
For example, Airbus Industries garnered initial long term credibility from explicit government support. Consequently, when bidding for an airline contract, Boeing would be more likely to accept short-term, non-economic returns when bidding against Lockheed than when bidding against Airbus.
OSS is Long-Term Credible
OSS systems are considered credible because the source code is available from potentially millions of places and individuals.
The likelihood that Apache will cease to exist is orders of magnitude lower than the likelihood that WordPerfect, for example, will disappear. The disappearance of Apache is not tied to the disappearance of binaries (which are affected by purchasing shifts, etc.) but rather to the disappearance of source code and the knowledge base.
Inversely stated, customers know that Apache will be around 5 years from now -- provided there exists some minimal sustained interest from its user/development community.
One Apache customer, in discussing his rationale for running his e-commerce site on OSS, stated, "because it's open source, I can assign one or two developers to it and maintain it myself indefinitely."
Lack of Code-Forking Compounds Long-Term Credibility
The GPL and its aversion to code forking reassures customers that they aren't riding an evolutionary `dead-end' by subscribing to a particular commercial version of Linux.
The "evolutionary dead-end" is the core of the software FUD argument.
Linux and other OSS advocates are making a progressively more credible argument that OSS software is at least as robust as -- if not more robust than -- commercial alternatives. The Internet provides an ideal, high-visibility showcase for the OSS world.
In particular, larger, more savvy organizations who rely on OSS for business operations (e.g. ISPs) are comforted by the fact that they can potentially fix a work-stopping bug independent of a commercial provider's schedule (for example, UUNET was able to obtain, compile, and apply the teardrop attack patch to their deployed Linux boxes within 24 hours of the first public attack).
Parallel Development
In OSS, developer resources are essentially free. Because the pool of potential developers is massive, it is economically viable to simultaneously investigate multiple solutions / versions to a problem and choose the best solution in the end.
For example, the Linux TCP/IP stack was probably rewritten 3 times. Assembly code components in particular have been continuously hand tuned and refined.
OSS = `perfect' API evangelization / documentation
OSS's API evangelization / developer education is basically providing the developer with the underlying code. Whereas evangelization of API's in a closed source model basically defaults to trust, OSS API evangelization lets the developer make up his own mind.
Strongly componentized OSS projects are able to release subcomponents as soon as the developer has finished his code. Consequently, OSS projects rev quickly & frequently.
Open Source Weaknesses
The weaknesses in OSS projects fall into 3 primary buckets:
- Management costs
- Process Issues
- Organizational Credibility
The biggest roadblock for OSS projects is dealing with exponential growth of management costs as a project is scaled up in terms of rate of innovation and size. This implies a limit to the rate at which an OSS project can innovate.
Starting an OSS project is difficult
From Eric Raymond:
It's fairly clear that one cannot code from the ground up in bazaar style. One can test, debug and improve in bazaar style, but it would be very hard to originate a project in bazaar mode. Linus didn't try it. I didn't either. Your nascent developer community needs to have something runnable and testable to play with.
Raymond's argument can be extended to the difficulty of starting/sustaining a project if there is no clear precedent / goal (or there are too many goals) for the project.
Obviously, there are far more fragments of source code on the Internet than there are OSS communities. What separates "dead source code" from a thriving bazaar?
One article (http://www.mibsoftware.com/bazdev/0003.htm) provides the following credibility criteria:
"....thinking in terms of a hard minimum number of participants is misleading. Fetchmail and Linux have huge numbers of beta testers *now*, but they obviously both had very few at the beginning.
What both projects did have was a handful of enthusiasts and a plausible promise. The promise was partly technical (this code will be wonderful with a little effort) and sociological (if you join our gang, you'll have as much fun as we're having). So what's necessary for a bazaar to develop is that it be credible that the full-blown bazaar will exist!"
I'll posit that some of the key criteria that must exist for a bazaar to be credible include:
- Large Future Noosphere -- The project must be cool enough that the intellectual reward adequately compensates for the time invested by developers. The Linux OS excels in this respect.
- Scratch a big itch -- The project must be important / deployable by a large audience of developers. The Apache web server provides an excellent example here.
- Solve the right amount of the problem first -- Solving too much of the problem relegates the OSS development community to the role of testers. Solving too little before going OSS reduces "plausible promise" and doesn't provide a strong enough component framework to efficiently coordinate work.
When describing this problem to JimAll, he provided the perfect analogy of "chasing tail lights". The easiest way to get coordinated behavior from a large, semi-organized mob is to point them at a known target. Having the taillights provides concreteness to a fuzzy vision. In such situations, having a taillight to follow is a proxy for having strong central leadership.
Of course, once this implicit organizing principle is no longer available (once a project has achieved "parity" with the state-of-the-art), the level of management necessary to push towards new frontiers becomes massive.
This is possibly the single most interesting hurdle to face the Linux community now that they've achieved parity with the state of the art in UNIX in many respects.
Another interesting thing to observe in the near future of OSS is how well these teams are able to tackle the "unsexy" work necessary to bring a commercial grade product to life.
In the operating systems space, this includes small, essential functions such as power management, suspend/resume, management infrastructure, UI niceties, deep Unicode support, etc.
For Apache, this may mean novice-administrator functionality such as wizards.
Integrative work across modules is the biggest cost encountered by OSS teams. An email memo from Nathan Myhrvold on 5/98 points out that of all the aspects of software development, integration work is the most subject to Brooks's Law.
Up to now, Linux has greatly benefited from the integration / componentization model pushed by previous UNIXes. Additionally, the organization of Apache was simplified by the relatively simple, fault-tolerant specification of the HTTP protocol and UNIX server application design.
Future innovations which require changes to the core architecture / integration model are going to be incredibly hard for the OSS team to absorb because it simultaneously devalues their precedents and skillsets.
These are weaknesses intrinsic to OSS's design/feedback methodology.
One of the keys to the OSS process is having many more iterations than commercial software (Linux was known to rev its kernel more than once a day!). However, commercial customers tell us they want fewer revs, not more.
The Linux OS is not developed for end users but rather, for other hackers. Similarly, the Apache web server is implicitly targeted at the largest, most savvy site operators, not the departmental intranet server.
The key thread here is that because OSS doesn't have an explicit marketing / customer feedback component, wishlists -- and consequently feature development -- are dominated by the most technically savvy users.
One thing that development groups at MSFT have learned time and time again is that ease of use, UI intuitiveness, etc. must be built from the ground up into a product and cannot be pasted on at a later time.
The interesting trend to observe here will be the effect that commercial OSS providers (such as RedHat in Linux space, C2Net in Apache space) will have on the feedback cycle.
How can OSS provide the service that consumers expect from software providers?
Product support is typically the first issue prospective consumers of OSS packages worry about and is the primary feature that commercial redistributors tout.
However, the vast majority of OSS projects are supported by the developers of the respective components. Scaling this support infrastructure to the level expected in commercial products will be a significant challenge. For both IIS and Apache, users outnumber developers by many orders of magnitude.
For the short-medium run, this factor alone will relegate OSS products to the top tiers of the user community.
A very subtle problem which will affect full-scale consumer adoption of OSS projects is the lack of strategic direction in the OSS development cycle. While incremental improvement of the current bag of features in an OSS product is very credible, future features have no organizational commitment to guarantee their development.
What does it mean for the Linux community to "sign up" to help build the Corporate Digital Nervous System? How can Linux guarantee backward compatibility with apps written to previous APIs? Who do you sue if the next version of Linux breaks some commitment? How does Linux make a strategic alliance with some other entity?
Open Source Business Models
In the last 2 years, OSS has taken another twist with the emergence of companies that sell OSS software and, more importantly, hire full-time developers to improve the code base. What's the business model that justifies these salaries?
In many cases, the answers to these questions are similar to "why should I submit my protocol/app/API to a standards body?"
The vendor of OSS-ware provides sales, support, and integration to the customer. Effectively, this transforms the OSS-ware vendor from a packaged goods manufacturer into a services provider.
Loss Leader -- Market Entry
The Loss Leader OSS business model can be used for two purposes:
- Jumpstarting an infant market
- Breaking into an existing market with entrenched, closed-source players
Many OSS startups -- particularly those in Operating Systems space -- view funding the development of OSS products as a strategic loss leader against Microsoft.
Linux distributors, such as RedHat, Caldera, and others, are expressly willing to fund full time developers who release all their work to the OSS community. By simultaneously funding these efforts, Red Hat and Caldera are implicitly colluding and believe they'll make more short term revenue by growing the Linux market rather than directly competing with each other.
An indirect example is O'Reilly & Associates' employment of Larry Wall -- "leader" and full time developer of PERL. The #1 publisher of PERL reference books, of course, is O'Reilly & Associates.
For the short run, especially as the OSS project is at the steepest part of its growth curve, such investments generate positive ROI. Longer term, ROI motivations may steer these developers towards making proprietary extensions rather than releasing OSS.
Commoditizing Downstream Suppliers
This is very closely related to the loss leader business model. However, instead of trying to get marginal service returns by massively growing the market, these businesses increase returns in their part of the value chain by commoditizing downstream suppliers.
The best examples of this currently are the thin server vendors such as Whistle Communications, and Cobalt Micro who are actively funding developers in SAMBA and Linux respectively.
Both Whistle and Cobalt generate their revenue on hardware volume. Consequently, funding OSS enables them to avoid today's PC market where a "tax" must be paid to the OS vendor (NT Server retail price is $800 whereas Cobalt's target MSRP is around $1000).
The earliest Apache developers were employed by cash-strapped ISPs and ICPs.
Another, more recent example is IBM's deal with Apache. By declaring the HTTP server a commodity, IBM hopes to concentrate returns in the more technically arcane application services it bundles with its Apache distribution (as well as hoping to reach Apache's tremendous market share).
First Mover -- Build Now, $$ Later
One of the exponential qualities of OSS -- successful OSS projects swallow less successful ones in their space -- implies a pre-emption business model: by investing directly in OSS today, a company can pre-empt / eliminate competitive projects later -- especially if the project requires API evangelization. This is tantamount to seizing a first mover advantage in OSS.
In addition, the developer scale, iteration rate, and reliability advantages of the OSS process are a blessing to small startups who typically can't afford a large in-house development staff.
Examples of startups in this space include SendMail.com (making a commercially supported version of the sendmail mail transfer agent) and C2Net (which makes a commercial, encrypted Apache).
Notice that no case of a successful startup originating an OSS project has been observed. In both of these cases, the OSS project existed before the startup was formed.
Sun Microsystems has recently announced that its "JINI" project will be provided via a form of OSS and may represent an application of the pre-emption doctrine.
The next several sections analyze the most prominent OSS projects including Linux, Apache, and now, Netscape's OSS browser.
A second memo titled "Linux OS Competitive Analysis" provides an in-depth review of the Linux OS. Here, I provide a top-level summary of my findings in Linux.
What is it?
Linux (pronounced "LYNN-ucks") is the #1 market share Open Source OS on the Internet. Linux derives strongly from the 25+ years of lessons learned on the UNIX operating system.
- Multi-user / Multi-threaded (kernel & user)
- Multi-platform (x86, Alpha, MIPS, PowerPC, SPARC, etc.)
- Protected 32-bit memory space for apps; Virtual Memory support (64-bit in development)
- SMP (Intel & Sun CPU's)
- Supports multiple file systems (FAT16, FAT32, NTFS, Ext2FS)
- High performance networking
- NFS/SMB/IPX/Appletalk networking
- Fastest stack in Unix vs. Unix perf tests
- Disk Management
- Striping, mirroring, FAT16, FAT32, NTFS
- Xfree86 GUI
Linux is a real, credible OS + Development process
Like other Open Source Software (OSS) products, the real key to Linux isn't the static version of the product but rather the process around it. This process lends credibility and an air of future-safeness to customer Linux investments.
- Trusted in mission critical environments. Linux has been deployed in mission critical, commercial environments with an excellent pool of public testimonials.
- Linux = Best of Breed UNIX. Linux outperforms many other UNIXes in most major performance categories (networking, disk I/O, process ctx switch, etc.). To grow their featurebase, Linux has also liberally stolen features from other UNIXes (shell features, file systems, graphics, CPU ports)
- Only Unix OS to gain market share. Linux is on track to eventually own the x86 UNIX market and has been the only UNIX version to gain net Server OS market share in recent years. I believe that Linux -- moreso than NT -- will be the biggest threat to SCO in the near future.
- Linux's process iterates VERY fast. For example, the Linux equivalent of the TransmitFile() API went from idea to final implementation in about 2 weeks time.
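That Linux equivalent of TransmitFile() is the sendfile(2) system call, which streams file data to a socket inside the kernel rather than copying it through user-space buffers. A minimal sketch using Python's os.sendfile wrapper (the function name and retry loop are my own illustration, not part of either implementation):

```python
import os
import socket

def transmit_file(conn: socket.socket, path: str) -> int:
    """Send a whole file over a connected socket using the kernel's
    sendfile(2), avoiding a round trip through user-space buffers."""
    with open(path, "rb") as f:
        size = os.fstat(f.fileno()).st_size
        sent = 0
        while sent < size:
            # os.sendfile(out_fd, in_fd, offset, count) returns bytes sent;
            # like write(2), it may transfer fewer bytes than requested.
            sent += os.sendfile(conn.fileno(), f.fileno(), sent, size - sent)
    return sent
```

The loop is needed because sendfile(2) is permitted to transfer fewer bytes than requested, so a robust caller must resume from the last offset.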
Linux is a short/medium-term threat in servers
The primary threat Microsoft faces from Linux is against NT Server.
Linux's future strength against NT server (and other UNIXes) is fed by several key factors:
- Linux uses commodity PC hardware and, due to OS modularity, can be run on smaller systems than NT. Linux is frequently used for services such as DNS running on old 486's in back closets.
- Due to its UNIX heritage, Linux represents a lower switching cost for some organizations than NT
- UNIX's perceived Scalability, Interoperability, Availability, and Manageability (SIAM) advantages over NT.
- Linux can win as long as services / protocols are commodities
Linux is unlikely to be a threat on the desktop
Linux is unlikely to be a threat in the medium-long term on the desktop for several reasons:
- Poor end-user apps & focus. OSS development processes are far better at solving individual component issues than they are at solving integrative scenarios such as end-to-end ease of use.
- Switching costs for desktop installed base. Switching desktops is hard and a challenger must be able to prove a significant marginal advantage. Linux's process is more focused on second-mover advantages (e.g. copying what's been proven to work) and is therefore unlikely to provide the first-mover advantage necessary to provide switching impetus.
- UNIX heritage will slow encroachment. Ease of use must be engineered from the ground up. Linux's hacker orientation will never provide the ease-of-use requirements of the average desktop user.
In addition to attacking the general weaknesses of OSS projects (e.g. Integrative / Architectural costs), some specific attacks on Linux are:
- Beat UNIX
All the standard product issues for NT vs. Sun apply to Linux.
- Fold extended functionality into commodity protocols / services and create new protocols
Linux's homebase is currently commodity network and server infrastructure. By folding extended functionality (e.g. Storage+ in file systems, DAV/POD for networking) into today's commodity services, we raise the bar & change the rules of the game.
In an attempt to renew its credibility in the browser space, Netscape has recently released, and is attempting to create an OSS community around, its Mozilla source code.
Organization & Licensing
Netscape's organization and licensing model is loosely based on the Linux community & GPL with a few differences. First, Mozilla and Netscape Communicator are 2 codebases with Netscape's engineers providing synchronization.
- Mozilla = the OSS, freely distributable browser
- Netscape Communicator = Branded, slightly modified (e.g. homepage default is set to home.netscape.com) version of Mozilla.
Unlike the full GPL, Netscape reserves the final right to reject / force modifications into the Mozilla codebase and Netscape's engineers are the appointed "Area Directors" of large components (for now).
Capitalize on Anti-MSFT Sentiment in the OSS Community
Relative to other OSS projects, Mozilla is considered to be one of the most direct, near-term attacks on the Microsoft establishment. This factor alone is probably a key galvanizing factor in motivating developers towards the Mozilla codebase.
The availability of Mozilla source code has renewed Netscape's credibility in the browser space to a small degree. As BharatS points out in http://ie/specs/Mozilla/default.htm:
"They have guaranteed by releasing their code that they will never disappear from the horizon entirely in the manner that Wordstar has disappeared. Mozilla browsers will survive well into the next 10 years even if the user base does shrink. "
Scratch a big itch
The browser is widely used / disseminated. Consequently, the pool of people who may be willing to solve "an immediate problem at hand" and/or fix a bug may be quite high.
Post parity development
Mozilla is already close to parity with IE4/5. Consequently, there is no strong example to chase to help implicitly coordinate the development team.
Netscape has assigned some of their top developers towards the full time task of managing the Mozilla codebase and it will be interesting to see how this helps (if at all) the ability of Mozilla to push on new ground.
An interesting weakness is the size of the remaining "Noosphere" for the OSS browser.
- The stand-alone browser is basically finished.
There are no longer any large, high-profile segments of the stand-alone browser which must be developed. In other words, Netscape has already solved the interesting 80% of the problem. There is little / no ego gratification in debugging / fixing the remaining 20% of Netscape's code.
- Netscape's commercial interests shrink the effect of Noosphere contributions.
Linus Torvalds' management of the Linux codebase is arguably directed towards the goal of creating the best Linux. Netscape, by contrast, expressly reserves the right to make code management decisions on the basis of Netscape's commercial / business interests. Instead of creating an important product, the developer's code is being subjugated to Netscape's stock price.
Potentially the single biggest detriment to the Mozilla effort is the level of integration that customers expect from features in a browser. As stated earlier, integration development / testing is NOT a parallelizable activity and therefore is hurt by the OSS process.
In particular, much of the new work for IE5+ is not just integrating components within the browser but continuing integration within the OS. This will be exceptionally painful to compete against.
The contention therefore, is that unlike the Apache and Linux projects which, for now, are quite successful, Netscape's Mozilla effort will:
- Produce the dominant browser on Linux and some UNIX's
- Continue to slip behind IE in the long run
Keeping in mind that the source code was only released a short time ago (April '98), there is already evidence of waning interest in Mozilla. EXTREMELY unscientific evidence is found in the decline in mailing list volume on Mozilla mailing lists from April to June.
Mozilla Mailing List
Internal mirrors of the Mozilla mailing lists can be found on http://egg.Microsoft.com/wilma/lists
Paraphrased from http://www.apache.org/ABOUT_APACHE.html
In February of 1995, the most popular server software on the Web was the public domain HTTP daemon developed by NCSA, University of Illinois, Urbana-Champaign. However, development of that httpd had stalled after mid-1994, and many webmasters had developed their own extensions and bug fixes that were in need of a common distribution. A small group of these webmasters, contacted via private e-mail, gathered together for the purpose of coordinating their changes (in the form of "patches"). By the end of February '95, eight core contributors formed the foundation of the original Apache Group. In April 1995, Apache 0.6.2 was released.
During May-June 1995, a new server architecture (code-named Shambhala) was developed which included a modular structure and API for better extensibility, pool-based memory allocation, and an adaptive pre-forking process model. The group switched to this new server base in July and added the features from 0.7.x, resulting in Apache 0.8.8 (and its brethren) in August.
Less than a year after the group was formed, the Apache server passed NCSA's httpd as the #1 server on the Internet.
The Apache development team consists of about 19 core members plus hundreds of web site administrators around the world who've submitted a bug report / patch of one form or another. Apache's bug data can be found at: http://bugs.apache.org/index.
A description of the code management and dispute resolution procedures followed by the Apache team are found on http://www.apache.org:
There is a core group of contributors (informally called the "core") which was formed from the project founders and is augmented from time to time when core members nominate outstanding contributors and the rest of the core members agree.
Changes to the code are proposed on the mailing list and usually voted on by active members -- three +1 (yes votes) and no -1 (no votes, or vetoes) are needed to commit a code change during a release cycle
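The commit rule described above is mechanical enough to state as code. A brief sketch (the function name and the encoding of votes as +1/-1 integers are my own illustration):

```python
def commit_approved(votes):
    """Apache-style change voting: a patch is committed during a
    release cycle only with at least three +1 (yes) votes and no
    -1 votes (vetoes) among the active members' votes."""
    return votes.count(+1) >= 3 and votes.count(-1) == 0
```

Note that a single veto blocks the change regardless of how many yes votes it has collected, which is what keeps a large, loosely organized team from committing contested code.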
Apache far and away has #1 web site share on the Internet today. Possession of the lion's share of the market provides extremely powerful control over the market's evolution.
In particular, Apache's market share in web server space presents the following competitive hurdles:
- Lowest common denominator HTTP protocol -- slows our ability to extend the protocol to support new applications
- Breathe more life into UNIX -- Where Apache goes, Unix must follow.
3rd Party Support
The number of tools / modules / plug-ins available for Apache has been growing at an increasing rate.
In the short run, IIS soundly beats Apache on SPECweb. Moving forward, as IIS moves into the kernel and takes advantage of deeper integration with NT, this lead is expected to increase further.
Apache, by contrast, is saddled with the requirement to create portable code for all of its OS environments.
HTTP Protocol Complexity & Application services
Part of the reason that Apache was able to get a foothold and take off was because the HTTP protocol is so simple. As more and more features become layered on top of the humble web server (e.g. multi-server transaction support, POD, etc.) it will be interesting to see how the Apache team will be able to keep up.
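The simplicity being described here is easy to demonstrate: a complete, if minimal, HTTP exchange fits in a couple dozen lines. The sketch below answers a single HTTP/1.0 GET with a fixed page (the port number and page body are arbitrary illustrations, not part of any real server):

```python
import socket

def serve_one(port: int = 8080) -> None:
    """Answer a single HTTP/1.0 GET with a fixed page. The entire
    protocol, at its simplest, is one request line plus headers,
    answered by a status line, headers, a blank line, and a body."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", port))
    srv.listen(1)
    conn, _ = srv.accept()
    request = conn.recv(4096).decode("latin-1")  # e.g. "GET / HTTP/1.0\r\n..."
    body = "<h1>hello</h1>"
    conn.sendall(
        ("HTTP/1.0 200 OK\r\n"
         "Content-Type: text/html\r\n"
         f"Content-Length: {len(body)}\r\n"
         "\r\n" + body).encode("latin-1"))
    conn.close()
    srv.close()
```

Everything a mid-90s server had to parse is that one request line plus optional headers, which is why a small group of webmasters could maintain a competitive implementation on the side.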
ASP support, for example is a key driver for IIS in corporate intranets.
IBM & Apache
Recently, IBM announced its support for the Apache codebase in its WebSphere application server. The actual result of the press furor is still unclear, however:
- IBM still ships and supports both Apache and Domino's GO web server
- IBM's commitment appears to be:
- Helping Apache port to strategic IBM platforms (AS/400, etc.)
- Redistributing Apache binaries to customers who request Apache support
- Support for Apache binaries (only if they were purchased through IBM?)
- IBM has developers actively participating in Apache development / discussion groups.
- IBM is taking a lead role in optimizing Apache for NT
Some other OSS projects:
- Gimp -- http://www.gimp.org -- Gimp (GNU Image Manipulation Program) is an OSS project to create an Adobe Photoshop clone for Unix workstations. Feature-wise, however, their version 1.0 project is more akin to PaintBrush.
- WINE / WABI -- http://www.wine.org -- Wine (Wine Is Not an Emulator) is an OSS windows emulation library for UNIX. Wine competes (somewhat) with Sun's WABI project which is non-OSS. Older versions of Office, for example, are able to run in WINE although performance remains to be evaluated.
- PERL -- http://www.perl.org -- PERL (Practical Extraction and Report Language) is the de facto standard scripting language for all Apache web servers. PERL is very popular on UNIX in particular due to its powerful text/string manipulation and UNIX's reliance on command line administration of all functionality.
- BIND -- http://www.bind.org -- BIND (Berkeley Internet Name Domain) is the de facto DNS server for the Internet. In many respects, DNS was developed on top of BIND.
- Sendmail -- http://www.sendmail.org -- Sendmail is the #1 share mail transfer agent on the Internet today.
- Squid -- http://squid.nlanr.net -- Squid is an OSS proxy server based on the ICP protocol. Squid is somewhat popular with large international ISPs although its performance is lacking.
- SAMBA -- http://samba.org -- SAMBA provides an SMB file server for UNIX. Recently, the SAMBA team has managed to reverse engineer and develop an NT domain controller for UNIX as well. SGI employs one of the SAMBA leads. http://www.sonic.net/~roelofs/reports/linux-19980714-phq.html: " By the end of the year ... Samba will be able to completely replace all primary NT Server functions."
- KDE -- http://www.kde.org -- "K" Desktop Environment. Combines integrated browser, shell, and office suite for Unix desktops. Check out the screen shots at: http://www.kde.org/kscreenshots.html and http://www.kde.org/koffice/index.html.
- Majordomo -- the dominant mail list server on the Internet is written entirely in PERL via OSS.
In general, a lot more thought/discussion needs to be put into Microsoft's response to the OSS phenomenon. The goal of this document is education and analysis of the OSS process; consequently, in this section I present only a very superficial list of options and concerns.
Where is Microsoft most likely to feel the "pinch" of OSS projects in the near future?
Server vs. Client
The server is more vulnerable to OSS products than the client. Reasons for this include:
- Clients "task switch" more often -- the average client desktop is used for a wider variety of apps than the server. Consequently, integration, ease-of-use, fit & finish, etc. are key attributes.
- Servers are more task specific -- OSS products work best if goals/precedents are clearly defined -- e.g. serving up commodity protocols
- Commodity servers are a lower "commitment" than clients -- Replacing commodity servers such as file, print, mail-relay, etc. with open source alternatives doesn't interfere with the end-user's experience. Also, in these commodity services, a "throw-away", experimental solution will often be entertained by an organization.
- Servers are professionally managed -- This plays into OSS's strengths in customization and mitigates weaknesses in lack of end-user ease of use focus.
Capturing OSS benefits -- Developer Mindshare
The ability of the OSS process to collect and harness the collective IQ of thousands of individuals across the Internet is simply amazing. More importantly, OSS evangelization scales with the size of the Internet much faster than our own evangelization efforts appear to scale.
How can Microsoft capture some of the rabid developer mindshare being focused on OSS products?
Some initial ideas include:
- Capture parallel debugging benefits via broader code licensing -- Be more liberal in handing out source code licenses to NT to organizations such as universities and certain partners.
- Provide entry level tools for low cost / free -- The second order effect of tools is to generate a common skillset / vocabulary tacitly leveraged by developers. As NatBro points out, the wide availability of a consistent developer toolset in Linux/UNIX is a critical means of implicitly coordinating the system.
- Put out parts of the source code -- try to generate hacker interest in adding value to MS-sponsored code bases. Parts of the TCP/IP stack could be a first candidate. OshM points out, however, that the challenge is to find some part of MS's codebase with a big enough Noosphere to generate interest.
- Provide more extensibility -- The Linux "enthusiast developer" loves writing to / understanding undocumented API's and internals. Documenting / publishing some internal API's as "unsupported" may be a means of generating external innovations that leverage our systems investments. In particular, ensuring that more components from more teams are scriptable / automatable will help ensure that power users can play with our components.
- Creating Community/Noosphere. MSDN reaches an extremely large population. How can we create social structures that provide network benefits leveraging this huge developer base? For example, what if we had a central VB showcase on Microsoft.com which allowed VB developers to post & publish full source of their VB projects to share with other VB developers? I'll contend that many VB developers would get extreme ego gratification out of having their name / code downloadable from Microsoft.com.
- Monitor OSS newsgroups. Learn new ideas and hire the best/brightest individuals.
Capturing OSS benefits -- Microsoft Internal Processes
What can Microsoft learn from the OSS example? More specifically: How can we recreate the OSS development environment internally? Different reviewers of this paper have consistently pointed out that internally, we should view Microsoft as an idealized OSS community but, for various reasons, do not:
- Different development "modes". Setting up an NT build/development environment is extremely complex & wildly different from the environment used by the Office team.
- Different tools / source code managers. Some teams use SLM, others use VSS. Different bug databases. Different build processes.
- No central repository/code access. There is no central set of servers to find, install, review the code from projects outside your immediate scope. Even simply providing a central repository for debug symbols would be a huge improvement. NatBro:
"a developer at Microsoft working on the OS can't scratch an itch they've got with Excel, neither can the Excel developer scratch their itch with the OS -- it would take them months to figure out how to build & debug & install, and they probably couldn't get proper source access anyway"
- Wide developer communication. Mailing lists dealing with particular components & bug reports are usually restricted to team members.
- More component robustness. Linux and other OSS projects make it easy for developers to experiment with small components in the system without introducing regressions in other components. DavidDs:
"People have to work on their parts independent of the rest so internal abstractions between components are well documented and well exposed/exported as well as being more robust because they have no idea how they are going to be called. The linux development system has evolved into allowing more devs to party on it without causing huge numbers of integration issues because robustness is present at every level. This is great, long term, for overall stability and it shows."
The trick of course, is to capture these benefits without incurring the costs of the OSS process. These costs are typically the reasons such barriers were erected in the first place:
- Integration. A full-time developer on a component has a lot of work to do already before trying to analyze & integrate fixes from other developers within the company.
- Iterative costs & dependencies. The potential for mini-code forks between "scratched" versions of the OS being used by one Excel developer and the "core" OS used by a different Excel developer.
Extending OSS benefits -- Service Infrastructure
Supporting a platform & development community requires a lot of service infrastructure which OSS can't provide. This includes PDC's, MSDN, ADCU, ISVs, IHVs, etc.
The OSS community's "MSDN" equivalent, of course, is a loose confederation of web sites with API docs of varying quality. MS has an opportunity to really exploit the web for developer evangelization.
Blunting OSS attacks
Generally, Microsoft wins by attacking the core weaknesses of OSS projects.
De-commoditize protocols & applications
OSS projects have been able to gain a foothold in many server applications because of the wide utility of highly commoditized, simple protocols. By extending these protocols and developing new protocols, we can deny OSS projects entry into the market.
David Stutz makes a very good point: in competing with Microsoft's level of desktop integration, "commodity protocols actually become the means of integration" for OSS projects. There is a large amount of IQ being expended in various IETF working groups which are quickly creating the architectural model for integration for these OSS projects.
Some examples of Microsoft initiatives which are extending commodity protocols include:
- DNS integration with Directory. Leveraging the Directory Service to add value to DNS via dynamic updates, security, authentication.
- HTTP-DAV. DAV is complex and the protocol spec provides an infinite level of implementation complexity for various applications (e.g. the design for Exchange over DAV is good but certainly not the single obvious design). Apache will be hard pressed to pick and choose the correct first areas of DAV to implement.
- Structured storage. Changes the rules of the game in the file serving space (a key Linux/Apache application). Creates a compelling client-side advantage which can be extended to the server as well.
- MSMQ for Distributed Applications. MSMQ is a great example of a distributed technology where most of the value is in the services and implementation and NOT in the wire protocol. The same is true for MTS, DTC, and COM+.
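The DAV complexity point can be made concrete. Even the simplest DAV operation, PROPFIND, layers a namespaced XML request body and a Depth header on top of plain HTTP, and the reply comes back as a 207 Multi-Status XML document. A minimal sketch follows; the property names and helper functions are illustrative, not taken from any particular implementation:

```python
# Sketch: the request body and extra headers a WebDAV PROPFIND carries
# on top of an ordinary HTTP request, per the WebDAV spec.
import xml.etree.ElementTree as ET

DAV_NS = "DAV:"

def propfind_body(properties):
    """Build a PROPFIND request body asking for the named DAV properties."""
    ET.register_namespace("D", DAV_NS)
    root = ET.Element(f"{{{DAV_NS}}}propfind")
    prop = ET.SubElement(root, f"{{{DAV_NS}}}prop")
    for name in properties:
        ET.SubElement(prop, f"{{{DAV_NS}}}{name}")
    return ET.tostring(root, encoding="unicode")

def propfind_headers(body):
    """Headers the request needs beyond a plain HTTP GET."""
    return {
        "Depth": "1",                      # immediate children only
        "Content-Type": "text/xml; charset=utf-8",
        "Content-Length": str(len(body.encode("utf-8"))),
    }

body = propfind_body(["displayname", "getlastmodified"])
headers = propfind_headers(body)
```

A bare-bones HTTP server handles none of this; an implementer must pick which properties, depth semantics, and multi-status cases to support first, which is exactly the pick-and-choose burden described above.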
Make Integration Compelling -- Especially on the server
The rise of specialty servers is a particularly potent and dire long term threat that directly affects our revenue streams. One of the keys to combating this threat is to create integrative scenarios that are valuable on the server platform. David Stutz points out:
The bottom line here is whoever has the best network-oriented integration technologies and processes will win the commodity server business. There is a convergence of embedded systems, mobile connectivity, and pervasive networking protocols that will make the number of servers (especially "specialist servers"?) explode. The general-purpose commodity client is a good business to be in -- will it be dwarfed by the special-purpose commodity server business?
- System Management. Systems management functionality potentially touches all aspects of a product / platform. Consequently, it is not something which is easily grafted onto an existing codebase in a componentized manner. It must be designed from the start or be the result of a conscious re-evaluation of all components in a given project.
- Ease of Use. Like management, this often must be designed from the ground up and consequently incurs large development management cost. OSS projects will consistently have problems matching this feature area.
- Solve Scenarios. ZAW, dial-up networking, wizards, etc.
- Client Integration. How can we leverage the client base to provide similar integration requirements on our servers? For example, MSMQ, as a piece of middleware, requires closely synchronized client and server codebases.
- Middleware control is critical. Obviously, as servers and their protocols risk commoditization, higher-order functionality is necessary to preserve margins in the server OS business.
- Release / Service pack process. By consolidating and managing the arduous task of keeping up with the latest fixes, Microsoft provides a key customer advantage over basic OSS.
- Long-Term Commitments. Via tools such as enterprise agreements, long term research, executive keynotes, etc., Microsoft is able to commit to a long term vision and create a greater sense of long term order than an OSS process.
Other Interesting Links
- http://www.lwn.net/ -- summarizes the week's events in the Linux development world.
- Slashdot -- http://slashdot.org/ -- daily news / discussion in the OSS community.
- http://news.freshmeat.net/ -- info on the latest open source releases & project updates.
Many people provided datapoints, proofreading, thoughtful email, and analysis on both this paper and the Linux analysis: