Linux in the Library

What can it do for you?

Last modified October 21, 2008



This page grew out of a Linux presentation I gave at the 1999 Customers of Dynix Inc. (CODI) conference. I continue to update this page as time permits. Portions of it have been used to give presentations at the following conferences:

The original version is kept here for historical purposes and to see if my writing style and html editing skills have gotten any better. ;-)

For those who aren't familiar with SirsiDynix, they are a vendor of Integrated Library Systems. Their two primary products are called Dynix (their legacy product) and Horizon (their current product). The Horizon database server is supported on Linux, as are some of their middleware products. Presently there is no Linux client for Horizon, but the vendor is planning to add one. Our Horizon database server originally ran on HP-UX, but was migrated to Linux in 2006.

This page might be of interest even if you don't work for a library but are curious about what Linux can do. This is by no means a complete list of all the things that are possible, just some of the things we've used it for.

Comments, suggestions, opinions & questions are welcome.
Please send them to: Eric Sisler (esisler@cityofwestminster.us)


Index

Although everything listed in the index is on this web page, I've included it in the event you want to look at a particular section or just want to know what you're getting yourself into. ;-) Enjoy!

Part I: Introduction & overview

  1. Introduction
  2. How did I get started with Linux?
  3. Why did I recommend Linux?

Part II: Library facilities

  1. College Hill Library
    1. Network Infrastructure
    2. Linux Servers
    3. PC Configuration
  2. 76th Avenue Library
  3. Irving Street Library
    1. Network Infrastructure
    2. Linux Servers
    3. PC Configuration

Part III: What is Linux?

  1. The Linux kernel, GNU, BSD & friends
  2. What is a Linux distribution?
  3. What distributions are available?
  4. What comes with a Linux distribution?
  5. Will Linux work with other operating systems?

Part IV: Choosing Linux

  1. Choosing Linux - direct costs
    1. Hardware
    2. Software - The Red Hat conundrum
  2. Choosing Linux - indirect costs
    1. Services
    2. The Learning curve (time)
    3. Learning materials
    4. Pre-production server hardware
    5. Remote system administration
    6. Ongoing system administration
  3. Choosing Linux - stability & performance
    1. Stability
    2. Performance
    3. Rebooting
    4. The Kernel & other processes
    5. Shared libraries
    6. Software "bit-rot"
    7. Disaster recovery
  4. Choosing Linux - support
  5. Choosing Linux - updates & source code
    1. Software updates
    2. Software package management with RPM
    3. Source code

Part V: Securing Linux

  1. Package updates
  2. Least privilege
  3. Proper configuration of running services
  4. Disabling and/or removal of unnecessary packages & services
  5. Using the Linux firewall tools to protect a server
  6. Logfiles & monitoring tools
  7. Backup, verification & recovery
  8. Uninterruptible Power Supply (UPS)
  9. Windows viruses, worms & exploits

Part VI: What we use Linux for at the Library

  1. Domain Name System (DNS)
  2. Samba (network file & print services)
  3. Apache (Internet web server)
  4. Internet filter & cache (Smart Filter & Squid)
  5. Dynamic Host Configuration Protocol (DHCP)
  6. Automating tasks with cron
  7. Internet e-mail
  8. Running listservs with Mailman
  9. Rsync (file & directory synchronization)
  10. Network Time Protocol (NTP)
  11. Trivial File Transfer Protocol (TFTP)
  12. Calcium web scheduling software
  13. Secure communication with OpenSSH
  14. Remote logging with syslog
  15. Perl programming
  16. Web development with LAMP
  17. VMware virtual servers
  18. Firewalling with iptables
  19. CUPS printing
  20. Revision control with RCS
  21. Bugzilla
  22. Ethereal packet sniffing
  23. Linux on the desktop
  24. Future projects at the Library using Linux

Part VII: Wrap up

  1. Sources & further reading
  2. Thank you's
  3. Dedication



PART I: INTRODUCTION & OVERVIEW


Introduction - who am I, anyway?

I am Eric Sisler, Library Applications Specialist for the City of Westminster. I have worked for the Library for 21+ years in various jobs: page, circulation clerk, courier, bookmobile operator & staff, technical services processor and cataloger. I am currently part of the Library's two-member Automation Services department, providing computer and network support to two library facilities. My primary responsibilities include the care & feeding of 12 Linux servers (7 physical / 5 virtual), 7 Windows servers (1 physical / 6 virtual), assorted network gear and far too many client PCs.

Index


How did I get started with Linux?

In 1996, we were in the process of moving our Dynix system from a shared HP-UX box to its own HP-UX box. (HP-UX is Hewlett Packard's proprietary version of Unix.) I was just beginning to learn more about Unix in general and wanted something I could use as a learning tool without the worry of destroying it. Linux fit the bill nicely - a distribution could be had for around $50 and it would run on my home PC. We were also in the process of planning the automation needs for a new library facility. I began thinking about what kind of services we wanted to provide and how we might go about doing so. I knew we would be providing access to some CD-ROM databases, so I began experimenting with Linux as a CD-ROM server, discovering it was easily capable of much more. Like many who like to tinker with computers, I have been accused of trying to re-create the Internet in my home out of spare parts!

Index


Why did I recommend Linux?

Obviously, I felt it was the best solution to our needs, which initially included serving some CD-ROM databases, a DNS server and some basic file & print services. Seemingly on its own, this list has grown to include many other services: more file/print services, domain logons & scripts, MARC record storage & retrieval, public PC administration & security, staff & public web pages, Internet object caching, DHCP, shell scripting, task automation and others I've probably missed.

I also felt limited by other Operating Systems for a variety of reasons:

If I had to do it all over again, would I make the same decision? Absolutely! I can't imagine providing all the services currently available as efficiently & reliably any other way.

Index




PART II: LIBRARY FACILITIES


Library facilities - College Hill Library

College Hill Library is a joint project between The City of Westminster and Front Range Community College. The 76,000 square foot facility was opened in April of 1998 and is run by both agencies, a story in itself that is beyond the scope of this document.

Network infrastructure
The network at College Hill Library is a switched environment with 100 Megabit copper connections to all clients. Servers are connected via Gigabit fiber, bonded Gigabit copper or bonded 100 Megabit copper connections. Although College Hill Library is located on the Front Range Community College campus, it is a network unto itself, separate from both the College and City networks. Access to City Hall is provided via a Gigabit fiber optic Wide Area Network (WAN). College Hill has two paths to the Internet: Comcast cable Internet service is used for most traffic from staff & public computers. Additionally, two T-1's shared with the City are used for remote access to library services (like this web page) and for online databases using IP address based authentication.
Linux servers
There are five Linux servers at College Hill Library - Gromit, Nick, Preston, Wendolene & Mr-Tweedy. They provide the services described later in this document.
Initially Gromit was built from desktop class hardware because it was the only hardware available. I had done much of the setup at home and I wanted to prove to myself (and others) that Linux was capable of what I wanted to do with it, even on "regular" hardware. If Linux failed to meet expectations I could re-use the hardware for a different OS. Happily, Linux has met and greatly exceeded expectations. Gromit was moved to server class hardware in December of 1998, partly to add drive space, partly to gain a little more performance, but mostly because Linux had more than proven itself and we wanted to reduce the chance of hardware failure by moving it to better hardware. Gromit's hardware is replaced on a regular schedule, and the server currently lives on the following hardware:
As the library's use of Linux grew, some of the services on Gromit were moved to Preston to balance the load. Preston's hardware is also replaced on a regular basis and currently resides on the following hardware:
College Hill Library PC configuration (113 total):
  • 49 staff PCs.
  • 64 public PCs.
    • 23 Internet.
    • 8 catalog only.
    • 2 word processing.
    • 5 stand-alone children's CD-ROM stations.
    • 1 instructor's workstation (Library instruction classroom).
    • 22 student workstations (Library instruction classroom).
    • 3 SAM sign-up stations (PC time management / print cost recovery).
  • 15 network printers.
  • 4 self-check units.

Index


Library facilities - 76th Avenue Library

76th Avenue library is the former main library for the City of Westminster. Originally built in 1961 and remodeled several times, it is 6,000 square feet in size. The 76th Avenue library was closed in March of 2004, replaced by the Irving Street library.


Library facilities - Irving Street Library

The Irving Street library opened in April of 2004 and is 15,000 square feet in size. It replaced the 76th Avenue Library, located just a few miles away.

Network infrastructure
The network at Irving Street Library is a switched environment with 100 Megabit copper connections to all clients. Servers are connected via Gigabit fiber or 100 Megabit copper connections. Access to City Hall and servers at College Hill are also via a Gigabit fiber optic WAN (Wide Area Network). Like College Hill, Irving Street also has two paths to the Internet: Comcast cable Internet service is used for most traffic from staff & public computers. Additionally, two T-1's shared with the City are used for remote access to library services and for online databases using IP address based authentication.
Linux servers
Irving Street has two Linux servers, Shaun & Mrs-Tweedy. They provide many of the same services found at College Hill. One reason for giving Irving Street its own servers was the small data circuit size available at the time Irving Street opened - 512K. Windows 2000 roaming profiles can chew up a great deal of bandwidth, making retrieving them from College Hill slow and painful. Another reason is to provide some independence between the facilities - a downed data circuit or server at one location does not affect the other as much. Although Irving Street is now connected via Gigabit fiber, there are currently no plans to remove the servers.
Irving Street Library PC configuration (37 total):

By the way - if you're wondering about the naming scheme for our Linux servers, they're all named after characters from Nick Park's excellent claymation series Wallace and Gromit, the motion picture Chicken Run and the motion picture Wallace & Gromit: Curse of the Were-Rabbit.

Index




PART III: WHAT IS LINUX?


The Linux kernel, GNU, BSD, etc.

Index


What is a Linux distribution?

When most people talk about Linux, what they're really talking about is a Linux distribution, which typically comes with the following:

Index


What distributions are available?

There are a number of Linux distributions available. While this is not a complete list, some of the better-known ones include:

If one of these distributions isn't to your liking, Distrowatch has an extensive list of them, complete with announcements, reviews and general information.

Index


What comes with a Linux distribution?

As a full-fledged Unix clone, Linux comes with everything you'd expect, and then some. This is by no means a complete list, just a sampling of what's included:

Index


Will Linux work with other operating systems?

Because Linux "speaks" many network protocols, it works well with other operating systems, including:

Index




PART IV: CHOOSING LINUX


Choosing Linux - direct costs

Hardware
Linux will run on nearly any of Intel's family of x86 processors (and clones), from the 386 to the Xeon and beyond. It also runs on a variety of other architectures, including Alpha (DEC, now owned by Compaq), SPARC (Sun) and AMD, and is being ported to even more platforms, large (IBM's S/390) and small (3Com's Palm Pilot).
Choosing the correct hardware is really a balance of the server's intended purpose and how much you want to spend. Although the Library's first production servers ran on desktop class or older "recycled" hardware, most have been upgraded to server class hardware. They have become too important during daily operations to run the risk of having them down because of hardware failure. Some things to keep in mind when choosing hardware:
Software - The Red Hat conundrum
When we began using Red Hat Linux in 1998, boxed sets could be purchased for between $50 - $150, or you could download ISO disk images for free. Package updates were accomplished by downloading the RPM files from Red Hat's errata website and installing them. Red Hat streamlined this process by starting the Red Hat Network, which offered easier to use package updates for a small fee - $60/year per server. Quite affordable, so we continued purchasing at least one of every new boxed set and added subscriptions to Red Hat Network for each of our servers.

In 2003, Red Hat significantly changed their product line, and Red Hat Linux 9 was the last of the retroactively dubbed "community" releases. When Red Hat announced the Red Hat Enterprise Linux product line and the associated costs we nearly went into shock, as did a lot of other loyal Red Hat customers/users. There was a huge flurry of commentary (read: confusion, anger & cries of "sell out") on Red Hat related lists, slashdot and many other places. The bare minimum for a server version of Red Hat Enterprise Linux (RHEL) was $349/year per server - nearly six times the annual cost of a Red Hat Network subscription. Ouch! RHEL is a subscription service, and as such subscribers are entitled to package updates and new versions of RHEL, but the baseline versions don't include support beyond 30-day basic installation & configuration support. For that type of Service Level Agreement (SLA), you need to step up to the standard or premium support package. We weren't interested in an SLA, just package updates and new releases, but $349/year per server still seemed like a lot of money.

For a comparison of the RHEL line see Red Hat's comparison chart and System Configuration Limits. For support options and subscription costs, see Server support options & pricing and Client support options & pricing. One thing I found confusing at first was the differing products & support options. There are four products (Workstation, Desktop, ES & AS) and three support levels (basic, standard & premium). Workstation & Desktop are similar, with Desktop designed for large corporate installations. ES & AS are for servers and include the same packages, but ES is designed for smaller servers and has processor and RAM limits. Within the products you can choose whatever support option you want, although not all support options are available for all products. Confused yet? I was at first.

The obvious question is, why did Red Hat change their product line and costs so drastically? Thoughts & opinions differ, some of mine are:

The big question is, where does that leave organizations that can't justify (or afford) the price jump? Linux and open source are all about choice, and here are some:

Now that you know some of the available options, which one did we choose? Well, we're doing a variety of things:

Index


Choosing Linux - indirect costs

Services
Most of the services we wanted to provide were available "out-of-the-box". Those that didn't come with the distribution were available from the Internet. Other operating systems either didn't provide all the services we wanted or were only available at an additional cost.
Time (the learning curve)
Yes, it is Unix and it does have a steep learning curve, but I felt it was well worth the effort. If you already know one flavor of Unix, learning another isn't that difficult and since I was already trying to learn Linux to teach myself more about Unix in general, this gave me a practical reason for doing so. The time required to get proficient with Linux really depends on the person learning it - your experience will almost certainly be different from mine. One thing that helps is breaking the task down into smaller, more manageable chunks - something that is relatively easy to do. Pick a service and configure it. After the first service is up and running, pick another and work on it. This will help you get comfortable with Linux, gain some experience and build on the knowledge acquired from earlier projects.
Learning materials
Although there are a wide variety of man pages, FAQ's, Howto's, web sites and other documentation available for every aspect of Linux, sometimes there's just no substitute for a book. I've read several of the O'Reilly "animal" series, titles by other publishers and the user's guides that come with the distribution.
Pre-production server hardware
It's always a good idea to have a pre-production server around to experiment on before rolling changes out to a production server. Although RPM makes it easy to revert to an older version of a software package, it's a bit tricky after you've upgraded the entire server to the vendor's latest release. Individual services may have undergone major changes, which will sometimes necessitate a new configuration file structure. The new release may also have new features that are worth investigating. A pre-production server can also be useful for testing out a new service you're planning on making available. The hardware for the pre-production server doesn't need to be anything fancy - an old desktop PC will generally do nicely.
Remote system administration
With servers at 2 different locations, remote administration is a must, and Linux fits the bill nicely. All administration can be done remotely from the shell prompt, although there are some graphical (GUI) administration tools as well. Most people prefer one or the other for system administration; you can read my musings on the subject here if you'd like. I routinely perform the following administration tasks remotely from the command line:
Ongoing system administration
While many of the routine housekeeping tasks are performed either automatically by Linux or shell scripts, there are obviously tasks that require human intervention:
Daily tasks (approx. 5 minutes/server):
Weekly tasks (approx. 10 minutes/server):
Monthly tasks (approx. 15 minutes/server):
Other tasks, performed as needed (time varies depending on the task):

Index


Choosing Linux - stability & performance

Stability
Linux has proven to be an extremely stable server OS. The old Gromit ran continuously from February to December 1998 with only 3 minor interruptions: an extended power outage, a physical move of the server and the installation of some additional hardware. The current continuous uptime record is held by Mrs-Tweedy - 757 days and counting!
Knock on wood, since the Library's first Linux server went into production in 1998, I've only had one software problem that required a reboot, which was probably my fault. I was moving the server from a 10 megabit switch to a 10/100 megabit switch (and back - the new switch decided not to work) and had forgotten to properly bring down the network interface before doing so. I think the TCP/IP stack got confused and couldn't decide if it was supposed to be operating at 10 or 100 megabit. Even so, the server managed to limp along until closing time. I can't ask for better reliability than that!
Performance
Linux performs well on most hardware, including older hardware. It uses the CPU and RAM efficiently, has one of the fastest TCP/IP implementations available and frequently outperforms other operating systems on the same hardware.
You can compile your own kernel (not as difficult as it sounds) to add or remove support for specific hardware or services, thus making the kernel image smaller and more efficient. It's also possible to tweak specific settings, like memory management, if you have a service that's a memory hog.
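For the curious, building a custom kernel boils down to a handful of commands. This is a hedged sketch of the 2.4-era procedure; details vary by kernel version and distribution:

    cd /usr/src/linux
    make menuconfig             # pick only the drivers & features you need
    make dep bzImage modules    # build the kernel image & the modules
    make modules_install        # install the modules under /lib/modules
    make install                # copy the kernel into place & update the boot loader

Reboot into the new kernel, but keep the old one around in the boot loader menu in case something goes wrong.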
I hesitate to quote any of the many benchmarks floating around on the Internet. Benchmarks are frequently biased in some way and the same set of data is sometimes interpreted in different, often contradictory, ways. From personal experience I can say that our servers have been up to whatever task we've thrown at them.
Rebooting
There are only a few circumstances when Linux must be rebooted: after upgrading to a new release, after compiling a new kernel, to replace/install hardware and of course after an extended power outage. While frequent rebooting may be a necessary evil on the client end, my philosophy is rebooting the server should be a rare event, something that Linux seems to agree with.
The Kernel & other processes
Very few things can crash the Linux kernel, faulty hardware being the #1 culprit. I have had services crash, generally due to misconfiguration on my part (oops!), but fixing the configuration and restarting the service is all that's been necessary.
Shared libraries
Like most other operating systems, Linux uses shared libraries (similar to Windows .dll files) to reduce the size of compiled binaries (programs) and provide a standard set of functions and procedures. Unlike some operating systems, the only time these shared libraries are changed is when you either (a) upgrade to a new release of Linux, which generally includes new shared libraries or (b) specifically upgrade them. Regular software packages that are linked to these libraries do not arbitrarily overwrite shared libraries or install their own versions. This prevents a newly installed piece of software from breaking others or having to install software in a particular order to get everything to work.
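You can see exactly which shared libraries a program is linked against with ldd. A trimmed example (library versions & load addresses are invented):

    $ ldd /usr/sbin/httpd
            libm.so.6 => /lib/libm.so.6 (0x40030000)
            libcrypt.so.1 => /lib/libcrypt.so.1 (0x40052000)
            libc.so.6 => /lib/libc.so.6 (0x40080000)
            /lib/ld-linux.so.2 => /lib/ld-linux.so.2 (0x40000000)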
Software "bit-rot"
Although it may take a bit more work to set up, once a Linux server has been properly configured you can run it until the hardware croaks without ever having to re-install the base operating system. Some operating systems can suffer mysterious performance slow-downs and stability problems after being installed for a while, sometimes requiring a complete reinstallation of the operating system. Linux will keep on running regardless of the number of software packages added or removed.
Disaster recovery
With a good backup routine and a little preparedness, it's possible to completely restore a crashed system to the state of the last backup without having to reinstall the operating system. For more information, see Restoring Feathers from the dead. In Feathers' case the data was stored on another server, but the same would have been true had the data been on tape or other media.

Index


Choosing Linux - support

One question I've been asked while presenting is "What would happen to the Library's Linux servers if you left?" A very good question and one that was more difficult to answer when we first began using Linux. At the time, Linux was still relatively unknown and support options were limited. Some distribution vendors provided installation support and maybe limited initial configuration support, but that was about it. Today Linux is growing rapidly and there are many more support options. Many distribution vendors are happy to sell you a support contract, including whatever SLA (Service Level Agreement) you need - everything from basic installation and configuration to custom programming/data services and Linux migration roll-outs. Hardware vendors like HP/Compaq, IBM, Dell, Gateway and others are now on the Linux "bandwagon", offering pre-configured systems and support contracts for Linux running on their hardware. There are also companies and independent Linux consultants supporting Linux regardless of platform or distribution. If I left, a short term solution could involve using the City's IT staff in conjunction with a short term support contract from a vendor or working with a Linux expert, either locally or via e-mail. A long term solution would (obviously) be to hire a Linux system administrator to take my place.

That said, commercial support has never been an issue here. It is nice to see support options becoming available for (a) small agencies that can't afford or don't have a resident expert and (b) IT departments that, although they may already have Unix/Linux expertise on staff, are required by management to have a support contract.

Don't think that just because commercial support is now available, it's something you must have. To quote an anonymous Linux user - "There's a bordering-on-clinically-interesting level of support from the Linux community at large." There are many Linux user groups, listservs, and gurus who are more than willing to help other Linux users. Whether you join a listserv or use e-mail mentoring, you can count on a solution from the Linux community.

My best sources for support and information include:

For a list of some Linux resources, click here.

Index


Choosing Linux - updates & source code

Software updates:
Software updates, especially security related ones, are released in a timely manner (sometimes days or even hours after a problem is discovered) via the Internet. Many vendors of proprietary operating systems release updates quarterly or less often. Sometimes they are reluctant to even admit the presence of bugs, especially security related ones. It's often been said that "Security through obscurity is no security." That's never been more true than in today's world of the ever-expanding Internet.
Software updates are also released as individual packages rather than one big bundle. This allows you to pick & choose which packages get updated. "All inclusive" updates from vendors of proprietary operating systems may include fixes for things you don't have or install things you don't want. Others can even break functioning software. In the event an updated package does cause problems, RPM makes it easy to revert back to the older package.
Software package management with RPM:
RPM is a software management utility created by Red Hat that has since been adopted by other distributions of Linux. It makes software installation, upgrades and even removal quite easy. Other distributions that do not use RPM generally have their own software management utility.

One thing that makes RPM especially useful is that it manages all installed packages, not just operating system related ones. Update mechanisms offered by other vendors sometimes include only the operating system and drivers, which makes keeping those systems up-to-date a multi-step process: use one tool to update the OS, then search a website to download updates for other applications. Time consuming, and at times, frustrating!
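A few everyday RPM commands, using a hypothetical package called foo:

    rpm -ivh foo-1.0-1.i386.rpm               # install
    rpm -Uvh foo-1.1-1.i386.rpm               # upgrade to a newer version
    rpm -Uvh --oldpackage foo-1.0-1.i386.rpm  # revert to the older version
    rpm -qa | grep foo                        # which version is installed?
    rpm -V foo                                # verify files against the RPM database
    rpm -e foo                                # remove the package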

Source code:
Source code is available for all open-source, GPL'ed software included with a distribution. This can be useful if you discover a bug, want to make changes or just practice your programming skills.

Index




PART V: SECURING LINUX


Security is something that frequently gets overlooked regardless of the operating system the server happens to be running. Because Linux is sometimes considered the "plaything" of Hackers and college students, it has an undeserved reputation for being insecure. Although older versions of Linux often had insecure services running by default, newer ones are much better and often include the option to configure a firewall during installation.

Package updates:
Probably the #1 reason servers get "cracked" (broken into) is because system administrators don't keep up with software package updates. Nearly all Linux distribution vendors have websites that list updated packages. While packages that fix minor bugs or add new features are optional, updating packages that fix security related problems is a MUST, especially for servers that are used by the public. With package management tools like RPM to make life easier, there's no excuse for not updating critical packages.
NOTE: The term "Hacker" and derivatives like "hacked" have been given a negative, even sinister, connotation by the popular media. There are many variations to the meaning of the word, most of them shedding a positive light on the term. According to The New Hacker's Dictionary one meaning is "A person who enjoys exploring the details of programmable systems and how to stretch their capabilities, as opposed to most users, who prefer to learn only the minimum necessary." (See the dictionary for additional meanings.) Call any respectable longtime system administrator, programmer or computer geek a Hacker and he'll probably take it as a compliment. Most Hackers consider those who try to break into computer systems to be "Crackers" or "Script Kiddies", also listed in the dictionary. Since people on both sides seem to like the term, some have resorted to referring to them as "White Hat" or "Black Hat" Hackers.
Least privilege:
Least privilege is another good way to help make a server more secure. Rather than denying activities you don't want and allowing everything else, the concept of least privilege states "allow only these specific activities, deny everything else." A good example is shell (often via telnet) access. Just because Linux can provide shell access to all staff, does everyone really need it? By denying shell access to everyone but those who need it you close a potential security hole.
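One hedged example of applying least privilege - give everyone except those who need it a non-functional shell (the exact path varies by distribution):

    usermod -s /sbin/nologin someuser

    # The resulting /etc/passwd entry (username & numbers invented):
    someuser:x:512:512:Staff member:/home/someuser:/sbin/nologin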
Proper configuration of running services:
Services that have many configuration options are a good place for potential holes to exist. If unsure, find some documentation (man pages, FAQ's, Howto's, books, etc.) that explains it or ask - there are many Linux-related websites, newsgroups & listservs where you can post a question.
Disabling and/or removal of unnecessary packages & services:
Just as proper configuration of running services is important, why run services you're not using? With RPM and other package managers it's trivial to remove an unused package. You can also re-install the package later if you discover a need for it. When removing the package is not an option, disable it or deny access to it. Telnet is a good example of this. While outbound telnet is often useful, incoming telnet is disabled on all of the Library's servers. By making a small change to /etc/inetd.conf, inbound telnet is disabled.
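The change itself is a one-liner - comment out the telnet entry and tell inetd to re-read its configuration:

    # /etc/inetd.conf - a leading '#' disables the service
    #telnet  stream  tcp  nowait  root  /usr/sbin/in.telnetd  in.telnetd

    killall -HUP inetd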
Using the Linux firewall tools to protect a server:
While Linux firewall tools like ipchains and iptables are frequently used on a firewall server to protect an entire network, there's no reason why they can't be used to protect an individual server. Libraries are a unique example of why this is useful, because there are typically both staff and public PCs on the same network. A traditional firewall only protects PCs and servers from attacks originating from the Internet. But what about those publicly accessible PCs that are already inside the firewall? By using the firewall tools on each server you can help protect your internal servers from harm while still allowing legitimate access.
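As a hedged sketch (the addresses and port choices are invented, not our actual rules), a default-deny iptables ruleset for a single server might allow ssh and Samba from the staff subnet while leaving the web server open to everyone:

    iptables -F INPUT
    iptables -P INPUT DROP
    iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
    iptables -A INPUT -p tcp --dport 22 -s 10.1.1.0/24 -j ACCEPT   # ssh, staff subnet only
    iptables -A INPUT -p tcp --dport 139 -s 10.1.1.0/24 -j ACCEPT  # Samba, staff subnet only
    iptables -A INPUT -p tcp --dport 80 -j ACCEPT                  # web server, open to all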
Logfiles & monitoring tools:
During normal operation a Linux server will generate quite a number of logfiles. There are automated tools that will summarize these logfiles and even alert you based upon criteria you've chosen. Perusing the logfiles periodically to get a "feel" for what your server is doing is also a good idea.
Backup, verification & recovery:
While the importance of having (and following!) a good backup routine cannot be overstated, verification and recovery are important too. Periodically pick a tape at random and restore several files to a temporary directory. Compare them to the ones on disk to be sure files are really being backed up. If they don't match, find out why. Has the file changed since the backup or was the file backed up incorrectly? Be comfortable with the backup software's recovery options in the event you need to use them. While you're trying to recover from a disk failure is not a good time to learn the nuances of your backup utility.
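A minimal verification sketch, assuming a tar-based backup on tape drive /dev/st0 (your backup utility's commands will differ):

    mkdir /tmp/restore-test && cd /tmp/restore-test
    tar -xvf /dev/st0 home/esisler/report.txt
    diff home/esisler/report.txt /home/esisler/report.txt && echo "files match"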
Uninterruptible Power Supply (UPS):
A properly configured UPS and monitoring software will not only provide protection against momentary power outages, but extended ones as well. Software is available to shut down the server cleanly during an extended power outage and reboot it once the power is restored. Test the functionality of the software on a pre-production server if possible just to be sure it works properly.
Windows viruses, worms & exploits:
With all the Windows viruses, worms & exploits "du jour", it's nice to run a server operating system that's immune to all of them. This isn't to say that Linux is exploit-free (no OS or application is), but because Linux was designed with multiple users and network connectivity in mind it is much more secure. When a serious flaw is discovered in Linux, the problem is generally fixed quickly - often within hours or days.

Index




PART VI: WHAT WE USE LINUX FOR AT THE LIBRARY


Domain Name System (DNS)

The Domain Name System or DNS is the Internet "phonebook" of hostnames & IP addresses. Anytime you connect to a computer on the Internet using its host address, DNS provides the translation from the hostname to the corresponding IP address.
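You can watch DNS do its job with the host utility. The hostname and addresses below are invented:

    $ host gromit.example.org
    gromit.example.org has address 192.168.1.5

    $ host 192.168.1.5
    5.1.168.192.in-addr.arpa domain name pointer gromit.example.org.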

Index


Samba (network file & print services)

Samba provides logon, file & print services, much like Windows or NetWare. It supports domain logons, logon scripting and a "browse" list of available shares. There are many access control options, both system-wide and share specific.

All of our client machines are currently Windows 2000, and the requirements for them are simple - TCP/IP networking and the MS "Client for Microsoft networks", both of which are included with Windows 2000. The NetBEUI protocol is NOT needed or helpful.

Domain logons & scripts:
Samba supports domain logons by username or machine name. Logon scripts are written as DOS style batch files, with the initial script often calling a series of "service" scripts. This method simplifies administration when changes are needed - just edit the service script instead of each user's individual script. Logon scripts typically perform functions like synchronizing the PC's clock, mapping drive letters and connecting printers, as sketched below:
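A minimal sketch (the server, share and script names are made up, not our production scripts):

    rem logon.bat - initial logon script, calls "service" scripts
    net time \\gromit /set /yes
    net use p: \\gromit\public
    call printers.bat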
Windows roaming profiles:
Samba also supports Windows roaming profiles. While it's nice to have your desktop "follow" you around, roaming profiles can get quite large if not managed properly. (We learned this the hard way and are working to correct it.)
CD-ROM databases:
At one point we used Samba to serve a number of CD-ROM databases. Since nearly all of our databases have moved to the Internet, we no longer need Samba for this purpose.
Although we have a number of multimedia PCs available in the children's area, multimedia & DVD CD-ROM's are not served from the network as they tend to be bandwidth hogs.
Server shares:
A "share" is merely a directory on the server that is accessible from a client PC, via a mapped drive letter or UNC (Universal Naming Convention) path.
Network printers:
Samba also provides access to network printers. Access to these printers can be configured by user login, group or individual PC. Although some printers are not directly supported by Linux, Samba will cheerfully spool print jobs to them as long as the client OS has a driver for the printer. We have actually phased out printing via Samba, switching to printing via CUPS. As we were installing SAM, a PC time management / print cost recovery solution, we had some difficulty getting it to work with Samba printing.

Index


Apache (Internet web server)

Apache is the most widely used web server software on the Internet, and we use it at the library to host a variety of web pages:

Index


Internet filter & cache (Smart Filter & Squid)

Smart Filter is a commercial Internet filter supported & developed by Secure Computing. It uses Squid as the proxy/cache engine and is a good example of combining Open Source software with commercial software. We use it in conjunction with SAM, a PC time management / print cost recovery system. PCs in the children's area are always filtered, minors are filtered regardless of the PC they use and adults can choose filtered or unfiltered access. Although there are other solutions available, many open source (and often free), we wanted to ensure compliance with federal CIPA (Children's Internet Protection Act) guidelines as well as Colorado's own laws.

Index


Dynamic Host Configuration Protocol (DHCP)

Dynamic Host Configuration Protocol or DHCP is a way to configure a PC's TCP/IP settings during startup, including IP address, hostname, domain name, default gateway, DNS servers, WINS servers and more. Note: Although hostnames can be changed with DHCP, NetBIOS (computer) names cannot. There are other ways to change the NetBIOS name remotely and I recommend the hostname & NetBIOS name be the same. It just makes life a little easier. ;-)

For security reasons and to aid troubleshooting, we statically assign IP addresses to all PCs using a MAC to IP address map in the DHCP configuration file. Since all PCs get the same configuration when their image is restored, DHCP reconfigures most of the network settings on reboot. In the event of a change in domain name or router failure, these settings can be changed on the server and propagated to the PCs by rebooting them.
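A fragment of what such a MAC to IP address map looks like in ISC dhcpd's /etc/dhcpd.conf (names & addresses invented):

    subnet 192.168.1.0 netmask 255.255.255.0 {
        option routers 192.168.1.1;
        option domain-name-servers 192.168.1.5;
    }

    host circ-desk-1 {
        hardware ethernet 00:A0:C9:12:34:56;
        fixed-address 192.168.1.101;
    }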

Index


Automating tasks with cron

An ongoing project has involved trying to automate some of the more routine system administration tasks. After all, what good is having a computer or two if it can't do some of the more mundane tasks for you? Some of the tools I've used so far include:

Write a shell or Perl script, schedule the script using cron and you've got an easy way to automatically complete routine tasks. Some automated tasks include:
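A couple of hypothetical /etc/crontab entries show the idea (the fields are minute, hour, day-of-month, month, day-of-week, user, command):

    45 23 * * * root /usr/local/bin/rotate-stats.sh
    0 2 * * 0 root /usr/local/bin/sync-marc-records.sh

The first runs a made-up statistics script at 11:45 pm daily; the second runs a synchronization script at 2:00 am every Sunday.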

Index


Internet E-mail

One of the major uses of the Internet is e-mail, and we use Postfix for delivery. It is fast, easier to administer than sendmail and secure. It is designed to be sendmail compatible, so in most cases you can use it as a drop-in replacement, which is what we did. While sendmail is a monolithic application with a difficult to learn (at best) configuration syntax, Postfix is broken up into smaller modules and uses a "plain English" configuration syntax. The City & College provide Exchange/Outlook accounts for staff, so our use of Postfix is somewhat limited:
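Most of Postfix's behavior is controlled by main.cf; a fragment (values invented) shows off that plain English style:

    # /etc/postfix/main.cf
    myhostname = gromit.example.org
    mydomain = example.org
    mydestination = $myhostname, localhost.$mydomain
    relayhost = [mail.example.org]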

Index


Running listservs with Mailman

Mailman is mailing list software similar to Majordomo, listserv, smartlist and other Internet mailing list (aka "discussion list") software. Each list has its own informational, archival and administrative web pages. The administrative pages allow the list owners to maintain and customize nearly all aspects of the list without resorting to e-mail commands or bugging the server's administrator. ;-) Although many search portals like Yahoo! will allow you to run your own listservs, it's nice to have the mailing list addresses be a little more "official".

Index


Rsync (file & directory synchronization)

Rsync is kind of a cross between the Unix rcp (remote copy) program and ftp. Rsync can run from the command line, as a daemon (service) and can also use ssh as the transport protocol for extra security. Rsync is much more flexible and generally faster than either rcp or ftp, is easy to run unattended and can make an exact copy of a directory structure, including ownership, permissions and timestamps. This comes in handy when you want to synchronize data between multiple servers. Rsync is currently used to keep the following data in sync, often automatically via cron:

If you're getting the impression that rsync is an incredibly useful tool to have, you're right!
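A typical invocation (hypothetical paths, one of our server names) mirrors a directory tree to another server over ssh, deleting files that no longer exist on the source:

    rsync -av --delete -e ssh /home/marc/ shaun:/home/marc/

Drop that into a cron job and the two copies stay in sync with no further attention.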

Index


Network Time Protocol (NTP)

Network Time Protocol (NTP) provides an easy, automated way to keep the time synchronized between devices (PCs, servers, network equipment, etc.) on a network. NTP servers are arranged in a hierarchical fashion, with each layer called a "stratum". A stratum 1 NTP server is directly connected to some type of highly accurate clock. A good example would be the official United States time kept here, maintained by the National Institute of Standards and Technology (NIST). A stratum 2 NTP server receives time from a stratum 1 server and so on.
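Pointing a server at upstream time sources takes only a couple of lines in /etc/ntp.conf (the second server is hypothetical):

    server time.nist.gov
    server ntp.example.org

Once the daemon has been running for a while, ntpq -p will show which peer it has chosen to synchronize with.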

Index


Trivial File Transfer Protocol (TFTP)

Trivial File Transfer Protocol (TFTP) is similar to regular FTP except there's generally no authentication (username/password) involved. Although it was designed for quick & easy transfer of small files (hence the name trivial) many times the data being transferred is hardly "trivial". We use it for:
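For example, assuming Cisco-style gear and an invented TFTP server address, saving a switch configuration looks something like this:

    switch# copy running-config tftp://192.168.1.5/switch-confg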

Index


Calcium web scheduling software

There are a number of meeting rooms available at College Hill Library and when we first opened in 1998 scheduling these rooms was done on paper and worked reasonably well. As more groups began using these rooms, it became obvious that the paper method was lacking in both access and efficiency. A centralized notebook worked but was difficult for more than one person to maintain and each day's schedule had to be copied and distributed to various service desks in order for staff to direct patrons to the correct meeting room. The daily copies were often outdated shortly after being printed, not to mention the waste of paper. The web is an ideal place to put information you want to make available to a wide audience and our room schedule calendar was no exception. The College was demoing some software for similar scheduling needs but in addition to being somewhat expensive, it lacked web access. We wanted everyone (staff & patrons alike) to be able to view the calendars from a web browser. We also wanted staff responsible for the room scheduling to be able to edit the calendars via a browser without requiring any additional software.

Unable to find suitable software for the moment, I created a set of ugly but functional templates for staff to schedule rooms. Each week was a separate html file, updated using a simple editor. Although primitive, it at least made the room schedules available to staff and patrons via a web browser. We continued this way for some time until it became obvious that the rooms were getting even more popular and our scheduling "system" was badly in need of an overhaul.

Enter Calcium from Brown Bear Software. It slices, it dices, it makes thousands of julienne fries...no wait - that's another product entirely. Although commercial software, the vendor is easy to work with and $500 for the entire package was very reasonable. Some of Calcium's features include "master" calendars, pop-up windows, grouping & coloring, e-mail confirmations, e-mail reminders and searching/filtering. Calcium is written in Perl and can be modified for local use if desired. You can view our room schedule here. Although not without its own quirks, I think the only way I'd be able to take Calcium back would be to pry it from staff's cold, dead hands! ;-)

Index


Secure communication with OpenSSH

Applications like telnet, rsh, ftp & rcp typically send the username/password in plain text. This is bad enough on a LAN with public computers on it, but it's completely (IMO) unacceptable for remote system administration across a public network (like the Internet) because you never know who might be listening with a packet sniffer. In its most basic form, ssh is a secure replacement for all of these. All data is encrypted, preventing sniffing of the authentication process (username/password) and session data. ssh is quite useful for other tasks as well and some of the things we use it for include:
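A few representative commands, using our server names and an invented port forward:

    ssh gromit                          # remote shell session, fully encrypted
    scp backup.tar.gz shaun:/backups/   # secure file copy (replaces rcp)
    ssh -L 8080:gromit:80 gateway       # tunnel a local port to a remote service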

Index


Remote logging with syslog

Most network equipment (hubs, switches, routers, firewalls) generates logfiles, but often has only a small buffer to store these messages. syslog provides the ability to collect these messages on a central server and store them indefinitely. We use syslog to collect logging messages from our network gear. These logs are analyzed periodically and kept as a record of network traffic.
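A hedged sketch of the server side: run syslogd with -r so it accepts remote messages (on Red Hat, via SYSLOGD_OPTIONS in /etc/sysconfig/syslog), then route the gear's logging facility to its own file in /etc/syslog.conf:

    # /etc/syslog.conf - network gear commonly logs to the local7 facility
    local7.*    /var/log/network.log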

Index


Perl programming

No discussion of Linux would be complete without Perl, although this one was added a bit late. ;-) Although Perl is widely used for system administration scripts, dynamic web pages, database manipulation, text processing, etc, etc, etc, I didn't begin using it until we migrated from Dynix to Horizon in May of 2003. My initial use of Perl is covered in the next section as it involved more than just Perl, but here are a few of the things I've accomplished with Perl:

Preparing Front Range Community College (FRCC) student records for import into Horizon:
Twice a semester the library receives a file of students registered for the upcoming semester at FRCC. Horizon includes a utility (bimport) for importing borrower records en-masse, but requires the records to be in a specific format. The data we receive isn't readable by bimport and may contain invalid characters, which causes bimport to die a horrible screaming death. Since most students register for more than one semester, the file also contains many potentially duplicate records. I needed a solution that could:

My SIS to bimport conversion script currently does the following:

  1. Prompts the user for borrower type, record type, campus, expiration date and source file.
  2. Sets defaults for assorted Horizon fields like phone type, age group and address type.
  3. Reads records from the source file, cleans up invalid data and puts useful fields into variables.
  4. Creates a hash array of Horizon city codes & descriptions. Attempts to match the spelling of the city in the source record with a Horizon city code. If unable to match, the city is used as is.
  5. Creates a PIN based on the phone number, which is required for access to various library services.
  6. Queries the Horizon database for existing records based on student ID and/or social security number. If a match is found, an update record is created and written to the output file. If no match is found, a new record is created and written to the output file.
  7. Summarizes the record counts (total records processed, new/update records, match/no match on city code) for the user.

It's a little more complicated than that, but those are the main features. Fields included in the output file are:

I can't imagine how we'd ever import records into Horizon without Perl! This ability became even more important when the College's online learning program grew significantly. They wanted to offer their online students access to some of the online databases we have at the library. We suggested using Horizon Remote Patron Authentication (RPA) for this purpose as it was something we already had. RPA uses Horizon records for determining what (if any) remote databases someone is entitled to use. Since I had already written the necessary Perl code to create records for bimport, it was a matter of making a few modifications to my program to handle the new student type so RPA would know what kind of authorization to grant. We receive between four and six files of online student & instructor records per semester and are able to get them loaded into Horizon quickly.
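The duplicate check in step 6 boils down to a DBI query. Here's a much-simplified sketch - the table & column names are placeholders, not Horizon's actual schema:

    #!/usr/bin/perl -w
    use strict;
    use DBI;    # with the DBD::Sybase driver installed

    my $dbh = DBI->connect('dbi:Sybase:server=HORIZON;database=horizon',
                           'user', 'password', { RaiseError => 1 });

    my $student_id = '1234567';    # would come from the source file
    my $sth = $dbh->prepare(
        'select borrower_id from borrower_id_map where student_id = ?');
    $sth->execute($student_id);
    my ($borrower) = $sth->fetchrow_array;
    print $borrower ? "update record\n" : "new record\n";
    $dbh->disconnect;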

One of Perl's many strengths is the enormous amount of code, often in the form of modules, contributed by Perl hackers worldwide. Perl would not have been able to talk to Sybase without the modules written by Tim Bunce (DBI) and Michael Peppler (DBD::Sybase), so thank you very much! And of course thanks to Larry Wall for creating Perl in the first place!

Horizon bibliography & reading list data extraction:
During the years we were on Dynix, I was able to purchase and write a number of tools useful for extracting data from our system. The migration to Horizon was a huge change in database engines (from UniVerse to SQL) and of course none of my tools were any good anymore - back to square one, I guess.

The next thing staff wanted was a way to produce printed bibliographies and reading lists, but the Horizon client wasn't able to pull together all the data required. The overall design philosophy of SQL dictates table structure and ultimately spreads the data over a number of tables. Add to that the complexity that is Horizon and the weird constraints of storing MARC (MAchine Readable Cataloging) records and you wind up with a large number of tables and data that isn't necessarily legible (or useful) in raw format. There are a number of SQL tools available, but the ones we had access to were either difficult for the novice to use, seemed better at summarizing information than displaying details or both. I needed a script that could:

I already had code available to connect to the Sybase database and I was beginning to learn my way around the Horizon table structure. I had a pretty good idea how to go about getting the data I wanted, but what I needed was a way to deal with "processed" fields, where the data isn't legible without "unprocessing" it (for lack of a better term). Once again another Perl hacker was able to provide the missing code, and my thanks go to Paul H. Roberts of Alpha-G Consulting, LLC. Currently my script does the following:

  1. Reads through a file (or several files) of Horizon BIB numbers, discarding non-numeric data.
  2. Performs a basic sanity check to see if the BIB number read actually exists in Horizon.
  3. Gets the title. This may sound simple, but is actually quite complicated. The title is stored in differing locations depending on length, and may include processed data. There are a lot of things to check and quite a few to clean up just to produce a legible title that can be sorted without leading articles getting in the way.
  4. Gets the author. Once again it sounds simple but isn't. Author information isn't stored directly in the MARC record, but rather in an authority record, which the MARC record is linked to. Therefore it's necessary to get the author from the authority record. More checking and cleanup to produce reasonably legible author information.
  5. On to the item information extraction, which includes barcode, location, collection code, call number, item status, etc. More cleanup including processed fields and fields that need to have their code translated to the corresponding description.
  6. Finally the record is written to the output file as a series of tab-separated fields.
  7. The file is imported into Excel or Word by library staff members, where they can remove unwanted fields & records, sort the report, and do pretty much whatever else they want with it.

The reports have been used to create "bookmark" reading lists for patrons, shelf lists for pulling items and shared with smaller Colorado libraries as topical reading lists. Many of the smaller libraries lack an automation system capable of complex subject searches or simply have no automation system at all.

In trying to keep with Perl tradition, I wrote much of the code as functions and placed them in a Perl module. This lets me centralize code that I might use again and makes it available in the event anyone else wants it. The program is capable of processing a large number of records in a short amount of time, certainly much faster than having to extract the data manually via an SQL tool. At present the program only works with Horizon BIB numbers but I have plans to expand it to work with barcode numbers.

Index


Web development with LAMP

LAMP is an acronym for Linux, Apache, MySQL and Perl (although the M & P can mean other things). I became acquainted with LAMP during our migration from Dynix to Horizon. We discovered during the migration that BIB (title) and holdings (item) use statistics wouldn't be carried over to Horizon, something collection development staff were rather unhappy about. My (evil) plan was to investigate the possibility of dumping our Dynix BIB & holdings use statistics into a MySQL Database and then using Perl CGI pages to search the database by BIB number or barcode and display the results as a web page.

My first stop was the book Open Source Web Development with LAMP by James Lee and Brent Ware. On the cover is a Swiss Army knife and the authors explain "A Swiss Army knife contains many useful tools, but most people only ever use the knife & screwdriver. Our purpose in writing this book isn't to teach you all the nuances of any of the topics we cover, because there are already plenty of books available for that. Following the 80/20 rule, our goal is to teach you the 20% of commands you'll use 80% of the time while including pointers to more in-depth reading." I highly recommend this title for anyone considering web development using Open Source tools.

Dynix "Stats-O-Matic"
After reading the LAMP book, I decided my plan was workable and so Dynix "Stats-O-Matic" was born. The first step was extracting the title, author and use statistics from Dynix and cleaning them up. I won't cover the details here because it was a time-consuming and ugly process. The available UniVerse data extraction tools were pretty decent, but very slow and there was a *lot* of data to extract.

The next step was loading the data into a MySQL database which was *way* faster than extracting it. With the data available in a MySQL database, I needed to tackle the Perl CGI script. The search page is a simple form that accepts user input and hands the data off to the CGI script, which does the following:

  1. Rule one - Never trust user input! Validate user input - check for no data, both BIB & barcode entered, and non-numeric input. If the data is still ok, proceed. If not, generate an error page.
  2. If a barcode was entered, get the BIB number. If the barcode or BIB is not found, notify the user.
  3. Select title, author and various use statistics from the MySQL database.
  4. Create an html report including BIB number, title, author, BIB use statistics, item information and item use counts. Hand the report back to the browser.
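A stripped-down sketch of that CGI script, using CGI.pm and DBI (the database, table & column names are invented):

    #!/usr/bin/perl -w
    use strict;
    use CGI qw(:standard);
    use DBI;

    my $bib = param('bib') || '';
    # Rule one - never trust user input!
    unless ($bib =~ /^\d+$/) {
        print header, '<p>Please enter a numeric BIB number.</p>';
        exit;
    }

    my $dbh = DBI->connect('dbi:mysql:statsomatic', 'user', 'password',
                           { RaiseError => 1 });
    my ($title, $author, $uses) = $dbh->selectrow_array(
        'select title, author, use_count from bib_stats where bib = ?',
        undef, $bib);
    print header, $title
        ? '<p>' . escapeHTML("$title / $author - used $uses times") . '</p>'
        : '<p>BIB number not found.</p>';
    $dbh->disconnect;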

That's it - sounds simple but it was my first LAMP project. It was a *lot* of work and even more trial and error. You can see the fruits of my labor here if you'd like. (With appropriate BIB & barcode title results, of course!)

Horizon Technical Services "Requests-A-Mundo"
Success with earlier Perl and LAMP projects gave me the experience & knowledge I needed to tackle the next Dynix to Horizon migration gap. When we were on Dynix, I routinely created a list of on order items that had requests (holds) on them. Technical Services staff used the list to locate these items so they could be processed, cataloged and delivered to the waiting patron quickly. There was no such report available in Horizon and I wanted to (a) create one that was better than the old Dynix report and (b) could be updated automatically.

The information Technical Services wanted on the report included PO number, title, author, item status and BIB number. I needed additional fields to select the correct items, translate status codes into descriptions and whatnot, so I knew there would be quite a few tables involved. TS Requests-A-Mundo performs the following functions:

  1. Populates a hash array of collection codes to descriptions and an array of status codes to descriptions.
  2. Gets all unique BIB numbers from the Horizon requests table.
  3. Gets all item record numbers for each BIB number.
  4. Gets a variety of fields from various Horizon tables, including: BIB number, barcode, collection code, call number, item status, PO number, author and title.
  5. Inserts item record information into a MySQL database.
  6. Gets & sorts all unique PO numbers from the MySQL database where the item's barcode is a "fake" (on order) barcode. Since all requests were put into the MySQL database, this process selects only those PO's where items are still on order. Older PO's won't have any items left on order, but may still have requests. I didn't pre-select only items on order because I thought I might want to have all items with requests available for any future reports.
  7. For each PO, select only those items that are still on order. We may have received a partial shipment, so some copies would already be in patrons' eager hands while others are still on order.
  8. Creates and writes an html report file.

The report is updated twice a day automatically by cron, and Technical Services is very happy to have it. It includes more useful information than the old Dynix report and I don't have to create it manually. You can see the current Technical Services Requests-A-Mundo report here if interested.

Horizon Public Services "Requests-A-Mundo"
During the Library's 2005 Summer reading program, I "broke" the Horizon utility staff had been using to create a report of display items with requests on them. As I dug deeper into the problem, I discovered that the utility was actually now working correctly, and that putting it back the way it was would create more problems than it solved. Staff still needed a way to create a report of these items, but I didn't want to risk further trouble by restoring the utility to its original state.

Fortunately I had written the Technical Services Requests-A-Mundo report as two scripts: the first collects the data and puts it in a MySQL database, and the second creates the report. I added a number of new fields to the MySQL database schema and modified the data collection script to gather the additional information needed to produce a second report for Public Services staff. This report is quite different from the Technical Services one, consisting of items already in the system rather than those on order. Although both reports share a common database, the PS report selects items using different criteria:

  1. Selects items from the request database containing a specific location and display status code. Each location has its own report and the items are grouped by display status.
  2. Counts the total number of requests and subtracts copy-specific requests and suspended requests. If the number of requests is still greater than zero, the item is added to the report.
  3. If there are items to report, creates and writes an html report file. If there are no items to report, creates and writes an html report stating "No items to report."

Like the TS report, the PS reports are updated twice a day automatically by cron. They contain more information than the ones previously created by Public Services staff, a process which was somewhat labor intensive and error prone. You can see the current Public Services reports here if interested.
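
The interesting part is the request arithmetic in step 2. Here's a sketch of how that selection might be coded against the shared database - table, column and status names are invented, and the html output is reduced to plain print statements:

    #!/usr/bin/perl -w
    # ps_requests.pl - sketch of the PS selection logic (hypothetical schema).
    use strict;
    use DBI;

    my $location = shift || 'college_hill';    # each location gets its own report
    my $dbh = DBI->connect('dbi:mysql:requests', 'user', 'password', { RaiseError => 1 });

    # Step 2: subtract copy-specific & suspended requests from the total.
    my $items = $dbh->selectall_arrayref(q{
        SELECT bib, title, display_status,
               total_requests - copy_requests - suspended_requests AS real_requests
        FROM request_item
        WHERE location = ?
        ORDER BY display_status, title}, { Slice => {} }, $location);

    # Keep only items with requests left to fill, grouped by display status.
    my @report = grep { $_->{real_requests} > 0 } @$items;

    if (@report) {
        print "$_->{display_status}: $_->{title} (BIB $_->{bib})\n" for @report;
    } else {
        print "No items to report.\n";    # step 3's empty-report case
    }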

Does all this Perl programming qualify me as JAPH (Just Another Perl Hacker)? I'm not sure if I've gained enough experience for that title just yet, so perhaps I'm still JAPN (Just Another Perl Novice). ;-)

Index


VMware virtual servers

VMware is software that runs multiple virtual machines on a single physical server, allowing one piece of hardware to do the work of several. VMware consists of three main parts:

VMware comes in three flavors, depending on your needs:

When we migrated our ILS (Integrated Library System) from Dynix to Horizon, we were on the verge of drowning in hardware. Things that used to run on the database server or in combination with other services now seemed to need their own server. Rather than putting each piece of middleware on its own small server, we decided to use VMware GSX to reduce our hardware needs, control server growth and (hopefully) make supporting all these middleware servers easier. We purchased one copy of VMware GSX server and installed it on Preston, which initially hosted 4 virtual machines; we later added 2 more.

We were so impressed with VMware that we decided to further consolidate our server hardware by upgrading an existing server and installing VMware on it. In 2006, we replaced a server at Irving Street and elected to make it a 3rd VMware host. Having VMware running on three servers allows us to balance the load of our virtual machines and provides redundancy in the event of hardware failure. Having VMware servers at two locations also provides some basic disaster recovery capabilities. Preston, Shaun and Wendolene currently host the following services & virtual machines:

Additionally, there are a number of other, non-production virtual machines running on our servers. We use them to experiment with new software, test major system upgrades (like Horizon & HIP), make modifications to other systems (like Bugzilla) and for training. Staff must attend a one-hour class before they begin using Bugzilla, so we copy the production VM, rename it Trainzilla and use it for training.

Overall we've been very happy with VMware - for us it's technology that's way cool and useful! As with any software there are pros and cons; here are some to consider before getting started:

Pros:

Cons:

The VMware website includes product information & documentation, FAQ's, a good knowledge base and user forum. VMware Server is free, so try it already!

Index


Firewalling with iptables

Linux firewalling tools have come a long way since the days of ipfwadm. The current tool, iptables, is a full-featured firewall rivaling some commercial offerings. In fact, there are some commercial products based on iptables. Some of the features include:

I've already mentioned the library's two firewalls, Mr-Tweedy & Mrs-Tweedy. Both use iptables, and they firewall the following networks from each other:

In 2006, our network configuration will become much more complex. In addition to collapsing the staff/server networks between the two facilities, we'll also be moving the public PCs to their own network and adding a wireless network for patrons to bring in their own laptops. Prior to beginning this project, my preferred method of firewall creation was to use a shell script with lots of variables. This method was already becoming increasingly cumbersome to maintain across multiple servers and I wanted a better option for managing multi-network firewalls. Firewall Builder is a GUI for creating & managing firewall configurations. It lets the user focus on the rules instead of the syntax by abstracting hosts, firewalls & services as objects. It can create rulesets for a variety of operating systems & firewall tools, including: Linux (ipchains & iptables), OpenBSD (PF) and Cisco (PIX). I now use Firewall Builder to manage firewall & individual host iptables configurations.
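
For contrast, here's roughly what the variables-and-rules approach looks like. The real scripts were shell; I've sketched the same idea in Perl to match the rest of this page, and the networks below are made up. Multiply this by several networks and hosts and it's easy to see why Firewall Builder becomes attractive.

    #!/usr/bin/perl -w
    # Sketch of the old script-with-variables firewall style (invented addresses).
    use strict;

    my $STAFF_NET  = '192.168.1.0/24';    # hypothetical staff/server network
    my $PUBLIC_NET = '192.168.2.0/24';    # hypothetical public PC network
    my $IPT        = '/sbin/iptables';

    sub ipt { system($IPT, @_) == 0 or die "iptables @_ failed\n"; }

    # Start clean and default to dropping forwarded traffic.
    ipt(qw(-F FORWARD));
    ipt(qw(-P FORWARD DROP));

    # Let reply traffic back through, and let staff reach the public network.
    ipt(qw(-A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT));
    ipt('-A', 'FORWARD', '-s', $STAFF_NET, '-d', $PUBLIC_NET, '-j', 'ACCEPT');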

Another handy tool to have is IP Tables State (iptstate), which creates a "top-like" display of active connections through the firewall. Some distributions now include iptstate.

Index


CUPS printing

Unlike printing via Samba, CUPS printing uses no Windows authentication, making it easy to use from Windows or Linux. CUPS supports a number of printing methods, including LPR, which is what we use. LPR is an older Unix printing service, but Windows clients can be configured to use it as well. We switched to LPR printing via CUPS after discovering that our public PC time management / print cost recovery system was having difficulty printing to network printers via Samba. We also use it to print from Windows servers that aren't part of our Samba domain.
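
If you want to try LPR printing to a CUPS server yourself: CUPS accepts LPR jobs through its cups-lpd mini-daemon, which is usually started from xinetd using a small config file along these lines (the server path varies by distribution):

    # /etc/xinetd.d/cups-lpd - accept LPR/LPD jobs and hand them to CUPS
    service printer
    {
        socket_type = stream
        protocol    = tcp
        wait        = no
        user        = lp
        server      = /usr/lib/cups/daemon/cups-lpd
        disable     = no
    }

Windows clients then just need an LPR port pointed at the server, with the CUPS queue name as the LPR queue name.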

Index


Revision control with RCS

As the name indicates, RCS is a system for managing multiple versions of files. The ".bak", ".old", ".older" & ".save" method of preserving files can get confusing and out of sync in a hurry - which version of the file did you really want? RCS alleviates this problem by storing each revision in a way that can show version differences line-by-line, along with the notes you entered when checking the file in. When you "check in" a file, RCS prompts you for text describing the changes and increments the file's version number. Decide you need to start over from the last working version? Simply use RCS to check out an older revision. Checking out a file for editing locks it so others can't change it; checking it out without locking gives you a read-only copy. I use RCS primarily for firewall scripts and Perl programs, although I should be using it for configuration files as well. Would this web page be a good candidate for RCS? Probably so, although if it gets much bigger I'll have to consider moving it to Wiki format.
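
A typical RCS round trip looks something like this (the filename is just an example):

    $ co -l fw-mrtweedy      # check out & lock the file for editing
    $ vi fw-mrtweedy         # make your changes
    $ ci -u fw-mrtweedy      # check in; RCS prompts for a log message and
                             # leaves a read-only working copy behind
    $ rlog fw-mrtweedy       # review the revision history & log messages
    $ co -r1.2 fw-mrtweedy   # retrieve an older revision, read-only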

Index


Bugzilla

Bugzilla is an industrial-strength bug tracking system. Bug tracking systems allow developers to keep track of outstanding bugs in their products effectively. Most commercial defect-tracking software is expensive. Despite being "free", Bugzilla has many features its expensive counterparts lack. Consequently, it has quickly become a favorite of hundreds of organizations across the globe. As of September 2006, 571 known companies and organizations worldwide use Bugzilla, including: Mozilla (Netscape & Mozilla browsers), the Linux kernel developers, NASA, Red Hat, the Apache Project, and Id Software.

Although Bugzilla is designed to track bugs in shipping products, it works well for tracking work requests of all kinds, which is how we use it at the library - to better track requests & issues submitted to Automation Services. Trying to track requests via paper, e-mail, voicemail, sticky notes, written lists and other means was proving increasingly difficult, not to mention frustrating for all staff. Bugzilla provides a web-based method of entering, tracking, updating and resolving issues submitted to Automation Services staff. All non-emergency requests to Automation Services are handled using Bugzilla. In retrospect, Bugzilla is one of those things we wish we'd installed much earlier. During our search for a solution, we reviewed a number of products before choosing Bugzilla.

Products reviewed:

Reasons for using a ticket tracking system:

Accountability, responsibility & process transparency for all staff:

Once we had decided on Bugzilla, our first thought was, "Wow, Bugzilla really has a lot of fields to be filled in. There's no way staff will use it in out-of-the-can form." Bugzilla has a lot of features and we knew that we wouldn't need many of them. We didn't want to mandate a new method for requesting IT support that was so complex no one would use it. Issues would go unreported and staff would be frustrated. We had some policy decisions to make and customization work to do before we could begin using Bugzilla.

Initial design:

Initial customization:

Automation Services (IT) staff only customizations:

Bugzilla concerns, issues & pitfalls:

Further information:

For more information about our customization of Bugzilla, including screenshots and downloadable templates, click here.

Index


Ethereal

This section is under construction

Index


Linux on the desktop

So, now that you've read all that blather, what about using Linux on the desktop? Well, I certainly use it on one of my desktop PCs for things like: e-mail, web browsing, server administration, network equipment configuration, programming and other assorted tasks. Linux is also installed on a laptop for "walking around the building" troubleshooting as well as weekend work and on-call support.

For staff computers, Windows 2000 is the only available option at this point. The Horizon client requires Windows, and both the City and College are heavily invested in the MS Office suite and Internet Explorer. The 8.0 release of Horizon is supposed to include Linux client support, at which time we will re-examine our options. One possible solution could involve using CodeWeavers' CrossOver plugin to run the required MS applications.

Although our Horizon web catalog doesn't require Windows, our current PC time management / print cost recovery system does. I don't expect this to change anytime soon, but if it does we'll certainly investigate the possibility of moving our public computers to Linux.

Index


Future projects

What does the future hold for Linux at the Library? Some of the projects I have in mind for "down the road" include:

Index




PART VII: WRAP UP


Sources & further reading

Index


Thank you's

Patricia, my Wife:

Veronica Smith, my supervisor:

My parents, Mel & Fran:

Scott Hewes:

Gerald (Jerry) Carter, Open Source developer, Samba team member, author & all-around explainer:

Bill Leeb, Rhys Fulber & company:

All those people who make Linux possible:

Index


Dedication

This page is dedicated to the memory of Judith A. Houk, my friend and mentor for many years.

Index | top | Back to Eric's Linux pages


This page created February 25, 2000, based largely on an earlier document.

This page last modified October 21, 2008 by: Eric Sisler (esisler@cityofwestminster.us)