1969 was a hell of a good year. The muscle cars coming out of Detroit were in their purest form. Rock and Roll was making history. Neil Armstrong took his (and mankind's) first step on the Moon. And Ken Thompson began development of the UNIX computer operating system. It was also in December of that year that Linus Torvalds was born. The impact of this last event would not be felt by the world at large for many years into the future.

The Usenet posting below is just about the only warning the world was to receive of the impending impact of one Linus Benedict Torvalds:

[comp.os.minix posting]
From: torvalds@klaava.Helsinki.FI (Linus Benedict Torvalds)
Newsgroups: comp.os.minix
Subject: What would you like to see most in minix?
Summary: small poll for my new operating system
Message-ID: <1991Aug25.205708.9541@klaava.Helsinki.FI>
Date: 25 Aug 91 20:57:08 GMT
Organization: University of Helsinki

Hello everybody out there using minix -
I'm doing a (free) operating system (just a hobby, won't be big and
professional like gnu) for 386(486) AT clones. This has been brewing
since april, and is starting to get ready. I'd like any feedback on
things people like/dislike in minix, as my OS resembles it somewhat
(same physical layout of the file-system (due to practical reasons)
among other things).
I've currently ported bash(1.08) and gcc(1.40), and things seem to work.
This implies that I'll get something practical within a few months, and
I'd like to know what features most people would want. Any suggestions
are welcome, but I won't promise I'll implement them :-)
Linus (torvalds@kruuna.helsinki.fi)
PS. Yes - it's free of any minix code, and it has a multi-threaded fs.
It is NOT portable (uses 386 task switching etc), and it probably never
will support anything other than AT-harddisks, as that's all I have :-(.

With no further public announcement Linux-0.01 was released on September 17, 1991. This release was quickly followed by additional, improved releases which, well over a decade later, are still coming. But now we are starting to get ahead of our story.

The deepest relevant roots of the Linux computer operating system go back to the beginning of "time-sharing systems". Below is a "60-mile-per-hour photography" look at the early history of time-sharing systems. Contained therein are people, projects, and places that will serve as keywords for googling for more depth. I leave that as an exercise for the reader. :-)

John McCarthy wrote a memo to his department head at MIT on January 1, 1959. In it he addressed some of the important issues in implementing a time-sharing system.

With the departure of John McCarthy for Stanford, Fernando J. Corbato took the lead of the Compatible Time-Sharing System (CTSS) project, which was first demonstrated in November 1961 at the MIT Computation Center. CTSS was described in "An Experimental Time-Sharing System", a paper presented at the 1962 Spring Joint Computer Conference.

Dr. J.C.R. Licklider became the director of the Information Processing Techniques Office (IPTO) in October 1962. This was a newly created office within ARPA.

Through the IPTO, Licklider funded research into advanced computer and network technologies, and commissioned thirteen research groups to perform research into technologies related to human-computer interaction and distributed systems. Each group was given a budget thirty to forty times as large as a normal research grant, complete discretion as to its use, and access to state-of-the-art technology.

Two of these groups were at MIT. One was the group behind CTSS, which with the new funding launched the MULTICS project.

The other developed the Incompatible Timesharing System (ITS) and became the Artificial Intelligence Lab. We will visit this group later.

CTSS was used to bootstrap the MULTICS project. The three organizations involved were MIT, GE (which provided the hardware), and Bell Labs (which was looking for a new operating system for in-house use).

I have been unable to find any material that cites just who from Bell Labs worked on the MULTICS project. I am pretty certain from implied comments that at least Ken Thompson and Dennis Ritchie were involved.

In any case, in late 1968/early 1969 Bell decided that the MULTICS project was a boondoggle and pulled out of it. The personnel who had been involved full time in that project became unassigned.

An aspect of the computing environment of that era I want to emphasize is that all computing was done within the batch processing paradigm. The heavy accent here is on the word "all". Interactive computing was virtually unheard of. MULTICS was an Ivory Tower project in the land of Academia.

A quick digression on what batch processing meant: you punched your program onto cards, handed the deck to an operator, and the machine ran the jobs in its queue one after another. Hours (or a day) later you got a printout back. A single mistyped character meant fixing the card and going to the back of the line. There was no interacting with a running program at all.

[The server/client conceptualization had not yet been developed.]

It is less than surprising that our intrepid heroes, cast out of the Eden of time-sharing computing into the hell of batch processing, were not thrilled by the prospect.

Like all geeks, our heroes liked to play computer games, and while bumming around Bell Labs during that summer of '69 they found a little-used (or wanted) PDP-7. With the familiarity gained from implementing his Space Travel game, and knowing that the PDP-7 was available for his essentially unlimited use, Ken Thompson created his new operating system on that machine. He started by implementing the filesystem that the group of heroes had designed on paper and blackboard. Or, in other words...

In the beginning was the filesystem. To go with this Ken implemented the Multics notion of a process as a locus of control. The filesystem was nearly as we know it today. It took a couple of years for many of the other components to fall into place. For example, pipes were not added until 1972.

This is a history of Linux, so I am going to skip a lot of the low-level details and simply highlight the features we are interested in from what escaped Bell Labs.

How Unix did escape from Bell Labs requires yet another detour. As a result of a court case in the 1950s, Bell was precluded from getting into the computer business (or any other new business, for that matter). The reasoning for this is simple: Bell had a government-approved/granted monopoly, and the ruling was meant to prevent Bell from using "monopoly rent" to leverage its way into other market sectors.

As with their earlier operating system, Bell would provide Unix to those who asked, with the understanding that there was no support. Thus, mostly university labs would receive a package of computer tapes with a note: "Here are the tapes you requested. Love, Ken."

The tree structure of the filesystem was borrowed from Multics. Ken Thompson streamlined the idea by making everything a file. Data files, devices, I/O, everything is considered a file. As mentioned previously, any action to be taken is a process. Thus, there are six fundamental ideas from which all of Unix derives. Fork, exec, open, close, read, and write. The first two concern processes and the other four have to do with files. With these six we have a working system. The rest of Unix just makes doing things easier.
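
To make those six concrete, here is a small C sketch of my own (it is not from any historical Unix source, and /etc/hostname and /bin/date are just example paths): it opens a file and copies it to standard output with read and write, closes it, then forks a child that execs another program.

    /* Copy a file to stdout with open/read/write/close, then fork/exec a child. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        char buf[4096];
        ssize_t n;

        int fd = open("/etc/hostname", O_RDONLY);    /* open */
        if (fd < 0) {
            perror("open");
            exit(1);
        }
        while ((n = read(fd, buf, sizeof buf)) > 0)  /* read */
            write(STDOUT_FILENO, buf, n);            /* write */
        close(fd);                                   /* close */

        pid_t pid = fork();                          /* fork */
        if (pid == 0) {                              /* child */
            execl("/bin/date", "date", (char *)NULL);    /* exec */
            perror("execl");                         /* reached only if exec fails */
            _exit(127);
        }
        waitpid(pid, NULL, 0);                       /* parent waits for the child */
        return 0;
    }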

I/O redirection is a powerful idea: < redirects input, > redirects output, and | pipes the output of one program into the input of another.
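
Under the hood, the shell builds those operators out of the same primitives. The sketch below is purely illustrative (my own, not from the original text): it wires up the equivalent of "ls | wc -l" by hand with pipe(), fork(), dup2(), and exec().

    /* Roughly what a shell does for "ls | wc -l". */
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        int fds[2];                     /* fds[0] = read end, fds[1] = write end */
        if (pipe(fds) < 0) {
            perror("pipe");
            exit(1);
        }

        if (fork() == 0) {              /* first child: ls, stdout -> pipe */
            dup2(fds[1], STDOUT_FILENO);
            close(fds[0]);
            close(fds[1]);
            execlp("ls", "ls", (char *)NULL);
            _exit(127);
        }

        if (fork() == 0) {              /* second child: wc -l, stdin <- pipe */
            dup2(fds[0], STDIN_FILENO);
            close(fds[0]);
            close(fds[1]);
            execlp("wc", "wc", "-l", (char *)NULL);
            _exit(127);
        }

        close(fds[0]);                  /* parent closes both ends ... */
        close(fds[1]);
        while (wait(NULL) > 0)          /* ... and reaps both children */
            ;
        return 0;
    }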

Doug McIlroy, the inventor of pipes and one of the founders of the Unix tradition, summarized that tradition this way:

  1. Make each program do one thing well. To do a new job, build afresh rather than complicate old programs by adding new features.
  2. Expect the output of every program to become the input to another, as yet unknown, program. Don't clutter output with extraneous information. Avoid stringently columnar or binary input formats. Don't insist on interactive input.
  3. Design and build software, even operating systems, to be tried early, ideally within weeks. Don't hesitate to throw away the clumsy parts and rebuild them.
  4. Use tools in preference to unskilled help to lighten a programming task, even if you have to detour to build the tools and expect to throw some of them out after you've finished using them.

Rob Pike, one of the great early masters of C, offers six rules of program design:

  1. You can't tell where a program is going to spend its time. Bottlenecks occur in surprising places, so don't try to second guess and put in a speed hack until you've proven that's where the bottleneck is.
  2. Measure. Don't tune for speed until you've measured, and even then don't unless one part of the code overwhelms the rest.
  3. Fancy algorithms are slow when n is small, and n is usually small. Fancy algorithms have big constants. Until you know that n is frequently going to be big, don't get fancy. (Even if n does get big, use Rule 2 first.)
  4. Fancy algorithms are buggier than simple ones, and they're much harder to implement. Use simple algorithms as well as simple data structures.
  5. Data dominates. If you've chosen the right data structures and organized things well, the algorithms will almost always be self-evident. Data structures, not algorithms, are central to programming.
  6. There is no Rule 6.

More of the Unix philosophy was implied not by what these elders said but by what they did and the example Unix itself set. Looking at the whole, we can abstract the following ideas:

  1. Rule of Modularity: Write simple parts connected by clean interfaces.
  2. Rule of Composition: Design programs to be connected to other programs.
  3. Rule of Clarity: Clarity is better than cleverness.
  4. Rule of Simplicity: Design for simplicity; add complexity only where you must.
  5. Rule of Transparency: Design for visibility to make inspection and debugging easier.
  6. Rule of Robustness: Robustness is the child of transparency and simplicity.
  7. Rule of Least Surprise: In interface design, always do the least surprising thing.
  8. Rule of Repair: When you must fail, fail noisily and as soon as possible.
  9. Rule of Economy: Programmer time is expensive; conserve it in preference to machine time.
  10. Rule of Generation: Avoid hand-hacking; write programs to write programs when you can.
  11. Rule of Representation: Use smart data so program logic can be stupid and robust.
  12. Rule of Separation: Separate policy from mechanism; separate interfaces from engines.
  13. Rule of Optimization: Prototype before polishing. Get it working before you optimize it.
  14. Rule of Diversity: Distrust all claims for “one true way”.
  15. Rule of Extensibility: Design for the future, because it will be here sooner than you think.

Rule of Modularity:
The only way to write complex software that won't fall on its face is to hold its global complexity down — to build it out of simple parts connected by well-defined interfaces, so that most problems are local and you can have some hope of upgrading a part without breaking the whole.

Rule of Composition:
Unix tradition puts a lot of emphasis on writing programs that read and write simple, textual, stream-oriented, device-independent formats. Under classic Unix, as many programs as possible are written as simple filters, which take a simple text stream on input and process it into another simple text stream on output.

GUIs can be a very good thing. Complex binary data formats are sometimes unavoidable by any reasonable means. But before writing a GUI, it's wise to ask if the tricky interactive parts of your program can be segregated into one piece and the workhorse algorithms into another, with a simple command stream or application protocol connecting the two. Before devising a tricky binary format to pass data around, it's worth experimenting to see if you can make a simple textual format work and accept a little parsing overhead in return for being able to hack the data stream with general-purpose tools.
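
To make the "simple filter" idea concrete, here is a sketch of my own devising: a do-one-thing program that reads text on standard input, lowercases it, and writes the result to standard output, so it can be dropped anywhere into a pipeline.

    /* A classic Unix filter: text stream in, text stream out, nothing else. */
    #include <ctype.h>
    #include <stdio.h>

    int main(void)
    {
        int c;
        while ((c = getchar()) != EOF)
            putchar(tolower(c));
        return 0;
    }

Compiled as, say, lower (the name and the log file here are made up), it composes with existing tools that know nothing about it: grep error log.txt | ./lower | sort | uniq -c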

Rule of Clarity:
Because maintenance is so important and so expensive, write programs as if the most important communication they do is not to the computer that executes them but to the human beings who will read and maintain the source code in the future (including yourself).

The implications of this advice go beyond just commenting your code. Good Unix practice also embraces choosing your algorithms and implementations for future maintainability. Buying a small increase in performance with a large increase in the complexity and obscurity of your technique is a bad trade — not merely because complex code is more likely to harbor bugs, but also because complex code will be harder to read for future maintainers.

Code that is graceful and clear, on the other hand, is less likely to break — and more likely to be instantly comprehended by the next person to have to change it. This is important, especially when that next person might be yourself some years down the road.

Rule of Simplicity:
There are many pressures which tend to make programs more complicated (and therefore more expensive and buggy). One is technical machismo. Programmers are bright people who are (justly) proud of their ability to handle complexity and juggle abstractions. Often they compete with their peers to see who can build the most intricate and beautiful complexities. Just as often, their ability to design outstrips their ability to implement and debug, and the result is expensive failure.

Often (at least in the commercial software world) excessive complexity comes from project requirements that are based on the marketing fad of the month rather than the reality of what customers want or software can actually deliver. Many a good design has been smothered under marketing's pile of “check-list features” — features which, often, no customer will ever use. And a vicious circle operates; the competition thinks it has to compete with chrome by adding more chrome. Pretty soon, massive bloat is the industry standard and everyone is using huge, buggy programs not even their developers can love.

Either way, everybody loses in the end.

The only way to avoid these traps is to encourage a software culture that actively resists bloat and complexity — an engineering tradition that puts a high value on simple solutions, looks for ways to break program systems up into small cooperating pieces, and reflexively fights attempts to gussy up programs with a lot of chrome (or, even worse, to design programs around the chrome).

Rule of Transparency:
Because debugging often occupies three-quarters or more of development time, work done early to ease debugging can be a very effective investment. A very effective way to accomplish this is to design for transparency and discoverability.

A software system is transparent when you can look at it and immediately understand what it is doing and how. It is discoverable when it has facilities for monitoring and display of internal state so that your program not only functions well but can be seen to function well.

Rule of Least Surprise:
The easiest programs to use are those which demand the least new learning from the user — or, to put it another way, the easiest programs to use are those that connect to the user's pre-existing knowledge most effectively.

Avoid gratuitous novelty and excessive cleverness in interface design. Pay attention to tradition. The Unix world has rather well-developed conventions about things like the format of configuration and run-control files, command-line switches, and the like. These traditions exist for a good reason — to tame the learning curve. Learn and use them.

Rule of Repair:
aka Postel's Prescription[3]: “Be liberal in what you accept, and conservative in what you send.” Postel was speaking of network service programs, but the underlying idea is more general. Well-designed programs cooperate with other programs by making as much sense as they can from ill-formed inputs; they either fail noisily or pass strictly clean and correct data to the next program in the chain.

Rule of Economy:
Times have changed from the days of mega-buck mainframe computers. Machine cycles and disk storage are cheap.

If we took this maxim really seriously throughout software development, the percentage of applications written in higher-level languages like Perl, Tcl, Python, Java, Lisp and even shell (languages that ease the programmer's burden by doing their own memory management) would be rising fast.

And indeed this is happening within the Unix world, though outside it most applications shops still seem stuck with the old-school Unix strategy of coding in C (or C++). Later in this book we'll discuss this strategy and its tradeoffs in detail.

Rule of Generation:
Human beings are notoriously bad at sweating the details. Accordingly, any kind of hand-hacking of programs is a rich source of delays and errors. The simpler and more abstracted your program specification can be, the more likely it is that the human designer will have gotten it right. Generated code (at every level) is almost always cheaper and more reliable than hand-hacked.

Rule of Representation:
Even the simplest procedural logic is hard for humans to verify, but quite complex data structures are fairly easy to model and reason about.

Data is more tractable than program logic. It follows that where you see a choice between complexity in data structures and complexity in code, choose the former. More: in evolving a design, you should actively seek ways to shift complexity from code to data.
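
One common way this plays out in C (my example, not the author's; the command names are made up for illustration) is replacing a chain of conditionals with a lookup table. The logic shrinks to a loop over the data, and extending the program means adding a row rather than more control flow.

    /* Command dispatch driven by a table instead of an if/else chain. */
    #include <stdio.h>
    #include <string.h>

    static void cmd_help(void)    { puts("commands: help, version, quit"); }
    static void cmd_version(void) { puts("example 1.0"); }
    static void cmd_quit(void)    { puts("bye"); }

    static const struct {
        const char *name;
        void (*run)(void);
    } commands[] = {
        { "help",    cmd_help },
        { "version", cmd_version },
        { "quit",    cmd_quit },
    };

    int main(int argc, char **argv)
    {
        if (argc < 2) {
            fprintf(stderr, "usage: %s <command>\n", argv[0]);
            return 1;
        }
        for (size_t i = 0; i < sizeof commands / sizeof commands[0]; i++) {
            if (strcmp(argv[1], commands[i].name) == 0) {
                commands[i].run();
                return 0;
            }
        }
        fprintf(stderr, "unknown command: %s\n", argv[1]);
        return 1;
    }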

Rule of Separation:
Hardwiring policy and mechanism together has two bad effects: it makes policy rigid and harder to change in response to user requirements, and it means that trying to change policy has a strong tendency to destabilize the mechanisms.

On the other hand, by separating the two we make it possible to experiment with new policy without breaking mechanisms. This design rule implies that we should look for ways to separate interfaces from engines.

One way to do this is to separate your application into cooperating front-end and back-end processes communicating via a specialized application protocol over sockets. The front end implements policy, the back end mechanism. The global complexity of the pair will often be far lower than that of a single-process monolith implementing the same functions, reducing your vulnerability to bugs and lowering life-cycle costs.

Additionally, make the configuration file for the front-end human-parseable. This makes it easy for users to change the behavior policy of an application.
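
As a small-scale sketch of that idea (mine, with a made-up file name app.conf and a made-up greeting setting), the program below keeps its mechanism fixed while letting a human-editable text file supply the policy.

    /* Mechanism in code, policy in a plain-text config file the user can edit. */
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        char line[256];
        char greeting[128] = "hello";          /* default policy */

        FILE *cfg = fopen("app.conf", "r");    /* policy lives outside the program */
        if (cfg) {
            while (fgets(line, sizeof line, cfg)) {
                if (strncmp(line, "greeting=", 9) == 0) {
                    strncpy(greeting, line + 9, sizeof greeting - 1);
                    greeting[strcspn(greeting, "\n")] = '\0';
                }
            }
            fclose(cfg);
        }

        printf("%s, world\n", greeting);       /* mechanism never changes */
        return 0;
    }

Changing the greeting means editing one line of app.conf, not recompiling anything; the same separation, scaled up, is what the front-end/back-end split over sockets buys you.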

Rule of Optimization:
The most basic argument for prototyping first is Kernighan & Plauger's: "90% of the functionality delivered now is better than 100% of it delivered never." Prototyping first may help keep you from investing far too much time for marginal gains.

As Donald Knuth put it, "Premature optimization is the root of all evil."

Rule of Diversity:
Nobody is smart enough to optimize for everything, nor to anticipate all the uses to which their software might be put. Designing rigid, closed software that won't talk to the rest of the world is an unhealthy form of arrogance.

Rule of Extensibility:
Leave room for your code to grow. When you write protocols or file formats, make them sufficiently self-describing to be extensible. When you write code, organize it so future developers will be able to plug new functions into the architecture without having to scrap and rebuild the architecture. Make the joints flexible, and put “If you ever need to...” comments in your code. You owe this grace to people who will use and maintain your code after you. When you design for the future, the sanity you save may be your own.

Why Open Source?

What makes the open source model so great? The Internet facilitates a distributed network scaling effect: development, testing, and debugging can be spread across a large, loosely coordinated pool of contributors.

The GPL and some other licenses ensure that the work stays open and is not absorbed by a proprietary shop that would take without giving back.

Reputation is another driver: contributors earn standing in the community when their code is accepted and used.

So is the personal itch: much open source work starts with a developer solving a problem he himself has, and then sharing the solution.

Linus has been saying for a number of years now that his job is to say "no". Additionally, there is a conscious evolutionary theme: if multiple schemes arise to solve some problem, they are all allowed to develop. Experience has shown that two projects that start out very much the same will often diverge, or interest in one will be lost in favor of the other.

Of late, Linus no longer accepts new features that are untested. There are a number of trees maintained by recognized people. If you want to get a brand new feature into Linux, the way to do it is to get one of these people to add it to their tree. Then, once it has been well tested, talked about on the list, and Linus sees that a variety of folks are using it, he will be open to accepting it into his own tree.

In this way Linus acts as a filter. He says no to most stuff and only says yes to stuff that can be seen as a step forward without introducing any new problems.

Just because a developer writes some code does not mean it will get into the kernel. If you watch the mailing list over a period of time you will notice would-be contributors arriving on the scene with ideas for fixing what is wrong with the Linux kernel. A large number of these folks have zero impact. The first filter is "show me the code". Even if the code doesn't really work and is more along the lines of a proof of concept, if the idea behind it is interesting it will get picked up and played with by other developers. When the code gets to patch form, some folks may apply it and try it out. If they report good results from testing, others may try it as well.

Modules
The introduction of loadable modules, and their subsequent development, has greatly enhanced the flexibility of the kernel. Even if one compiles the desired functionality directly into the kernel, modules still provide a component structure to the code.
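
For a feel of what a module looks like at the source level, here is a minimal hello-world sketch (my illustration, not from the original text; the "hello" name is arbitrary). It uses the standard module_init/module_exit entry points, prints a message to the kernel log on load and unload, and would be inserted and removed at runtime with insmod and rmmod.

    /* Minimal loadable kernel module: logs a message on load and unload. */
    #include <linux/init.h>
    #include <linux/kernel.h>
    #include <linux/module.h>

    static int __init hello_init(void)
    {
        printk(KERN_INFO "hello: module loaded\n");
        return 0;
    }

    static void __exit hello_exit(void)
    {
        printk(KERN_INFO "hello: module unloaded\n");
    }

    module_init(hello_init);
    module_exit(hello_exit);

    MODULE_LICENSE("GPL");
    MODULE_DESCRIPTION("Minimal example module");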