Looking at Unix
All information herein is the view of the author. Use at your own risk, no warranty of any kind is provided.

INTRO and CONTENT:

OOPS, talking about Unix on a RISC OS site? Yep, we talk about some strange things around here. This document was written before breakfast one morning. While accuracy was attempted at every turn, please forgive any small errors.

Unix (and work-alikes such as Linux) is by nature a timesharing system, and because of this it is not a very good option for personal computers. That said, there are some things about Unix that are good and can be applied to personal computer operating systems. In this document I intend to look a little at both the things that make Unix a poor choice for personal computing and the things from the Unix world that are useful for personal computing.

Every attempt is made to avoid technical details in this document, while still providing enough information to make the important points.

The sections of this document are:

  • Unix in General.
  • Why Unix is not a Desktop OS.
  • Good things from Unix.


Unix in General:

Unix is an OS that evolved in an environment where minicomputers were the norm, and as such a timesharing OS made good sense. Unix is still a good timesharing OS to this day, and has a lot of features that are only of use to a timesharing OS.

The core features of what makes Unix the OS it is include:

  • Multiple user accounts, with multiple-user-at-a-time access.
  • Remote login support at a low level.
  • A unified tree filesystem.
  • The "everything is a file" abstraction, including all devices (see the C sketch after this list).
  • Minimal error checking.
  • Memory mapping.
  • The Shell is a user program.
  • Controlled Access.
  • Shell syntax based on sh (usually with enhancements).
  • Powerful command line utilities that can be easily piped together.
  • A strong tie between the C Programming Language and the OS API (to the point of having a C Standard Library implementation as a core OS component).
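
To make the "everything is a file" point concrete, here is a minimal C sketch (assuming a modern Unix-like system that provides the /dev/urandom random-number device). A device is opened and read with exactly the same calls as an ordinary file:

    #include <stdio.h>
    #include <fcntl.h>
    #include <unistd.h>

    int main(void)
    {
        unsigned char buf[8];

        /* Open the kernel's random-number device like any file. */
        int fd = open("/dev/urandom", O_RDONLY);
        if (fd < 0) { perror("open"); return 1; }

        /* read() works identically on files, devices and pipes. */
        if (read(fd, buf, sizeof buf) == (ssize_t)sizeof buf) {
            for (size_t i = 0; i < sizeof buf; i++)
                printf("%02x ", buf[i]);
            putchar('\n');
        }
        close(fd);
        return 0;
    }
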
There is also the Unix Philosophy, though most modern Unix and Unix-like systems no longer adhere to it (from about the early 1980s to the present it has been lost in most Unix-like systems).

It should be easy to see that many of these features do not fit well on a computer that is used by one person at a time, locally at the machine. Unfortunately many modern OS's copy the timesharing features of Unix, making those OS's ill-suited to most single-user computers, and making them systems that should never connect to the internet.

Unix is by its nature a very well designed OS for timesharing. Timesharing is the use of a single computer by multiple remote users (at separate terminals) at the same time. This can be very useful for sharing the resources of a very high-end, expensive computer amongst multiple users at a location, thus saving operating costs. It also makes the system potentially vulnerable to unwanted access, and thus it is best in most cases to keep networks containing such a system off of the internet.

The X Window System: A GUI that is designed for timesharing by nature (remote execution of graphical applications), and as such has become part of most Unix-like OS's. The X Window System is not limited to Unix-like systems, though it is not very useful on most other systems (just timesharing systems). While a good design for systems that require running applications on a remote computer over the network, this makes the architecture of X a poor fit for a purely local system. There have been attempts to overcome the limits of X on a local system, though they are still bound to the timesharing philosophy of Unix-like systems.
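
To illustrate the point, here is a minimal C sketch using Xlib (assuming the X11 development headers are installed; build with 'cc hello.c -lX11'). Nothing in it draws to the screen directly; every call becomes a protocol message to whatever X server the DISPLAY environment variable names, whether that server is on the local machine or across the network:

    #include <stdio.h>
    #include <stdlib.h>
    #include <X11/Xlib.h>

    int main(void)
    {
        /* Connect to the server named by DISPLAY: ":0" for the local
           machine, or something like "remotehost:0" over the network.
           The application code is identical either way.              */
        Display *dpy = XOpenDisplay(NULL);
        if (dpy == NULL) {
            fprintf(stderr, "cannot connect to X server\n");
            return EXIT_FAILURE;
        }

        int scr = DefaultScreen(dpy);
        Window win = XCreateSimpleWindow(dpy, DefaultRootWindow(dpy),
                                         10, 10, 200, 100, 1,
                                         BlackPixel(dpy, scr),
                                         WhitePixel(dpy, scr));

        XSelectInput(dpy, win, KeyPressMask);
        XMapWindow(dpy, win);   /* another message to the server */
        XFlush(dpy);

        XEvent ev;
        XNextEvent(dpy, &ev);   /* wait for a keypress, then quit */

        XCloseDisplay(dpy);
        return EXIT_SUCCESS;
    }

Run it as 'DISPLAY=remotehost:0 ./hello' and the window appears on another computer entirely. The same protocol round trips happen even when the server is local, which is exactly the overhead complained about below.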

Single User Mode Note: It is possible to bring many Unix systems up in a single-user mode. It is also possible to configure many Unix systems to be single-user only, and only accessible at the local system. If done well, with care not to have any daemons or other tasks that allow remote access of any kind, this can make Unix a usable desktop system. Even in single-user mode, though, some of the ways things are done internally are not ideal for optimal operation on a local workstation or desktop computer, because they were designed to work well in a timesharing environment.


Why Unix is not a Desktop OS:

When we speak of a Desktop OS, what are we talking about? In the simplest terms we are speaking of an operating system that is used by a person on a computer that they are directly using, and that is only used by one person at a time. Sometimes we do use these systems from other computers (e.g. with VNC or similar), though in general the computer is only used by one person at a time.

It is often forgotten that the above definition also fits most modern terminals used to access remote data. Most modern multiple point access systems do not use a system level login, instead serving the data through a much more restricted interface (often something like a "web" front end).

These systems do not need the features of a timesharing OS like Unix. Everything these systems are used for can be done without the multiple user at a time stuff, and without the remote operation stuff that define Timesharing OS's like Unix. These systems are only used by a single person at a time as far as the OS is concerned, and as such only need to support a single user at a time.

Unix does many things in ways that are only of benefit on multiple-user-at-a-time OS's, and that end up making its use on a local system much less efficient. The way it uses terminals for shell access (even on the local system) is far less efficient than a truly local screen system. Because X uses an API designed to be network transparent, it imposes this slower means of making API calls even on local tasks, and it will always be less efficient than a GUI API designed to run only locally.

We could look at some of the core architecture of Unix, and show more reasons that it is not a good fit for desktop usage, though we are attempting to keep this document as non-technical as possible.

One thing that Unix usually supports in its native filesystems is the concept of hard linking: more than one directory entry pointing to the same underlying file, sharing the same metadata. This is similar to crosslinking, though done deliberately. It is one feature that seems to be of no real use, as it holds the potential to cause far more problems than it solves. There is a better solution in most Unix and Unix-like systems, and that is symbolic linking.
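
A minimal C sketch of the difference (the filenames here are hypothetical, and 'data.txt' must already exist):

    #include <stdio.h>
    #include <unistd.h>
    #include <sys/stat.h>

    int main(void)
    {
        if (link("data.txt", "hard.txt") != 0)    /* second directory entry,  */
            perror("link");                       /* same underlying file     */

        if (symlink("data.txt", "soft.txt") != 0) /* new file that merely     */
            perror("symlink");                    /* stores the target's name */

        struct stat st;
        if (stat("data.txt", &st) == 0)           /* both hard names count:   */
            printf("link count: %ld\n", (long)st.st_nlink);   /* prints 2     */
        return 0;
    }

Delete 'data.txt' and 'hard.txt' still opens the old contents, while 'soft.txt' dangles; it is this kind of hidden aliasing that causes the problems mentioned above.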

Unix is good at Multi-User Systems: Unix is a very good OS for systems that need to be shared by many users on the same network (or by other attached terminals). This is very useful for some kinds of workflow, where the main computer is a very high-end workstation. Even more so if the main computer is a supercomputer that may cost millions of dollars to install, and may take enough power to be a significant cost, on top of the increased maintenance costs. So Unix has its place, and it is a very good OS where it fits.

Unix also has its place in distributed computing on a local network. This is really a form of supercomputing, so it is not so much an addition to the above as a clarification. The unified filesystem structure of Unix, as well as the everything-is-a-file abstraction, makes Unix very good for this kind of application. Of course it also helps that Unix is a cleanly multithreaded OS implementation, which makes distributing a workload across multiple processors (potentially in multiple computers) very simple to accomplish as well.
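
As a small sketch of how simple this is at the thread level (assuming POSIX threads; build with 'cc sum.c -lpthread'; the array contents and sizes are arbitrary examples), here four workers each sum one quarter of an array:

    #include <pthread.h>
    #include <stdio.h>

    #define NTHREADS 4
    #define COUNT    1000

    static long data[COUNT];
    static long partial[NTHREADS];

    /* Each worker sums its own quarter of the array. */
    static void *worker(void *arg)
    {
        long id = (long)arg, sum = 0;
        for (long i = id * (COUNT / NTHREADS); i < (id + 1) * (COUNT / NTHREADS); i++)
            sum += data[i];
        partial[id] = sum;
        return NULL;
    }

    int main(void)
    {
        pthread_t tid[NTHREADS];
        for (long i = 0; i < COUNT; i++)
            data[i] = i;

        for (long t = 0; t < NTHREADS; t++)
            pthread_create(&tid[t], NULL, worker, (void *)t);

        long total = 0;
        for (long t = 0; t < NTHREADS; t++) {
            pthread_join(tid[t], NULL);       /* wait, then combine results */
            total += partial[t];
        }
        printf("total = %ld\n", total);       /* 0 + 1 + ... + 999 = 499500 */
        return 0;
    }

Spreading the same pattern across computers rather than processors takes more work, of course, but the unified filesystem and file abstraction mentioned above are what make that step natural on Unix.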

Of course there are some things that come from the Unix World that are quite useful for personal computers, and that leads us to the next section.


Good Things From Unix:

This document is primarily about Unix as applied to Personal Computing. This section looks at those things that come from Unix that are very much useful to personal computer usage.

Some very useful features, even for Personal Computing, that many modern OS's have that come from Unix include:

  • A user program provides the default shell (originally from Multics, though it got to us by way of Unix).
  • Input and Output redirection for command line utilities.
  • Abstracting many devices as files.
  • Pipes, when implemented for local-only operation (see the sketch after this list).
  • Multithreading (Unix systems popularised it).
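
Here is a minimal C sketch of a local-only pipe, equivalent to typing 'ls | wc -l' at the shell; the pipe(), fork() and dup2() calls used here are the same machinery the shell itself uses:

    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    int main(void)
    {
        int fd[2];
        if (pipe(fd) != 0) { perror("pipe"); return EXIT_FAILURE; }

        pid_t pid = fork();
        if (pid < 0) { perror("fork"); return EXIT_FAILURE; }

        if (pid == 0) {                  /* child: becomes "ls"        */
            dup2(fd[1], STDOUT_FILENO);  /* stdout now feeds the pipe  */
            close(fd[0]);
            close(fd[1]);
            execlp("ls", "ls", (char *)NULL);
            perror("execlp");            /* only reached on failure    */
            _exit(127);
        }

        dup2(fd[0], STDIN_FILENO);       /* parent: becomes "wc -l",   */
        close(fd[0]);                    /* reading from the pipe      */
        close(fd[1]);
        execlp("wc", "wc", "-l", (char *)NULL);
        perror("execlp");
        return EXIT_FAILURE;
    }

Both ends live in the same kernel, so the data never leaves the machine; this is the local-only operation the list item above refers to.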

There are also many useful utilities that got their start on Unix and Unix-like OS's, many of which we all use today on just about every OS. These include, in part:

  • grep : One of the most useful command line content searching utilities, used on almost all OS's today.
  • dd : An extremely useful utility for working with files in chunks, or mass storage images, or many other things.
  • ps : To view running processes/tasks (sometimes under different command names).
  • top : Another process management program.
  • cc : Often by other names, the C Compiler at the command line.
  • svn : Source Version management system.
  • ftp : TCP/IP File Transfer Protocol client.
  • slrn : NNTP newsreader with most modern features (descended from rn).
  • gopher : Simple text mode Gopher Protocol client.
  • sendmail : Simple Sendmail client, for sending E-Mail.
  • mail : Simple command line mail reader.
  • diff : Utility to compare two or more text files and create files describing their differences. Its output is also used (typically with patch) to apply the changes.

There are others as well. The list could easily go on for many pages.

UNIX PHILOSOPHY: Much of the conceptual "Unix Philosophy" is a positive thing for any kind of OS. The concept of keeping it simple is always advantageous to any system, as is the concept of having multiple smaller applications working together to do far better than a large application can alone. These are the parts of the Unix Philosophy that used to apply almost universally to RISC OS software as well.

There are also Unix tools that have made their way around that are not as useful. Many of these have better options native to other OS's, though due to their Unix origin they are used in preference to the better options by some people, and are often pushed in ways that become a negative for those other OS's. These I will not mention in much detail, though they include 'make', 'configure', 'yacc', 'awk', 'vi', 'lex', 'ed' and similar Unix utilities.

Some Laments:

Online Help: Unix had a very good online help system, known as man pages. Unfortunately these ended up going unmaintained, and have largely fallen to the point of being unusable now. Other OS's have had even better online help systems, though in many cases these have fallen by the wayside (favouring web help, which is bad news). Thankfully the use of !Help files and StrongHelp in RISC OS continues to be well kept for us.

Many Small Together is Better: For its first 15 years Unix mostly followed its philosophy of using multiple simple programs together to accomplish a complex task; this made a powerful and maintainable system that was easier to use. RISC OS also had the culture of using many small WIMP based programs together to provide more powerful, simple, and dynamic usage than can be achieved with a large program by itself. Unfortunately for both systems we now see it more common for people to try to create and use mega-applications that are less useful, harder to use overall, and more restricted in their usage.

Low Resource Usage: Unix began life on computers with less than 64 K-words of RAM, sometimes a lot less, and it ran well there even without paging memory to storage. Now most Unix-like OS's use hundreds of MB of RAM (K-word = 1024 machine words, MB = 1024*1024 bytes [1048576 bytes]). Thankfully RISC OS continues to be reasonably lean: it is still possible to boot in less than 8MB of RAM, with about 6MB of that being the OS ROM image (a normal RISC OS configuration takes about 14MB to 18MB at boot).

Simple Compilers: Unix helped teach the importance of keeping even the compilers simple and small. It taught us to use a small fast compiler that does minimal optimisation, with an intermediate (IM) optimiser to improve the optimisation, and a separate peephole optimiser to finish up. This taught us that it is the programmer's job to produce reasonable code that is optimally organised (the rules for optimal high-level code do not change in general); the compiler should not compensate for poorly optimised high-level code. Unfortunately compilers now attempt to compensate for the optimisation errors of programmers, and attempt a lot of messing around with the functional structure of the code to do so. This makes for large, slow, wasteful compilers, and sometimes breaks code that uses good optimisation practices.

Memory Leaks: In the days of resource-limited systems, and when the Unix Philosophy was actually followed, people took every possible care to avoid introducing memory leaks into code. This is still just as important a practice, even on our large-memory systems of today (just because the memory is there is not a valid reason to waste it). Unfortunately some programmers no longer consider a memory leak to be the major show stopper that it still is.
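
As a minimal sketch of how a leak typically creeps in (the function names here are hypothetical), compare these two versions of the same routine in C. The first loses its only pointer to the buffer on the early-return path; the second releases it on every path:

    #include <stdlib.h>
    #include <string.h>

    char *copy_leaky(const char *s)
    {
        char *buf = malloc(strlen(s) + 1);
        if (buf == NULL)
            return NULL;
        if (s[0] == '\0')
            return NULL;          /* BUG: buf is never freed */
        strcpy(buf, s);
        return buf;               /* caller must free() this */
    }

    char *copy_fixed(const char *s)
    {
        char *buf = malloc(strlen(s) + 1);
        if (buf == NULL)
            return NULL;
        if (s[0] == '\0') {
            free(buf);            /* release on the error path too */
            return NULL;
        }
        strcpy(buf, s);
        return buf;               /* caller must still free() this */
    }

In a small utility that runs and exits this hardly matters; in a long-running program the leaked buffers accumulate until the whole system pays for them, which is why it remains a show stopper.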

I would like to point out that there exist multiple Linux distributions that will boot to a full desktop environment with a modern look and feel, and do so in less than 16MB of RAM (the one I sometimes use takes 12MB at boot). On these it is possible to have a complete, up-to-date web browser (there are a few options) and keep the total system memory usage under 150MB, even with a lot of ECMAScript-heavy HTML5 pages loaded and active. And this could be improved if programmers cared to do a good job again (the web browsers do have memory leaks, and provable bloat).