SPCM - Simple, Portable Cluster Manager
SPCM is a free, open source integrated tool set for managing a simple HPC (High Performance Computing) cluster.
It is the only portable cluster management suite we are aware of and is designed to be easily adapted to most POSIX platforms.
SPCM automates the configuration and management of head nodes, compute nodes, file servers, and visualization nodes. Most common management tasks can be performed using a simple menu interface, while additional tasks are supported by command-line tools.
SPCM automatically installs and integrates the SLURM scheduler and the Ganglia web-based network monitoring suite with the Apache web server.
SPCM is currently beta-quality with a moratorium on major new features until the existing code reaches a high level of cleanliness and robustness. Only a few new features, such as integrating parallel file systems, are planned for the more distant future.
SPCM was developed over many years as an in-house tool to manage production HPC clusters at the University of Wisconsin -- Milwaukee. While the code still needs some cleanup and redesign, most of it is fairly evolved and stable. Efforts for the foreseeable future will focus on refactoring and improving the user interface.
The images below should provide a basic idea of what SPCM is about. Most day-to-day management can be done from the terminal-based menu interface, while more advanced tasks are done via the command-line. SPCM automatically configures a web server with the Ganglia monitoring system.
The design philosophy centers on simplicity and performance. These ideals are achieved in part by minimizing interdependence among cluster nodes. Each compute node contains a fully independent operating system installation and critical software installations on its own local storage. Compared with clusters that utilize shared storage more extensively, this strategy increases cluster setup and maintenance time slightly in exchange for simpler management, less "noise" on the local network, fewer single points of failure, and fewer bottlenecks.
Core design principles:
Implementation of this design is facilitated by leveraging the systems management tools provided by the base system, including the FreeBSD ports system which automates the installation of most mainstream open source applications. The pkgsrc package manager is used on other platforms. Yum is used for the most basic system services and commercial software support on RHEL derivatives, with pkgsrc providing SLURM, Apache, and newer versions of many user tools such as editors.
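As one illustration of this approach, installing a mainstream open source application on a FreeBSD node is typically a one-line operation, either from prebuilt binary packages or from the ports tree (the package chosen below is just an example):

    pkg install openmpi                                 # prebuilt binary package
    cd /usr/ports/net/openmpi && make install clean     # or build from the ports tree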
The SPCM tools are written almost entirely in POSIX Bourne shell using standard Unix tools to configure the system, and utilizing ports/packages for all software management.
From a networking perspective, a typical HPC cluster is a LAN (Local Area Network): a group of computers on a private network behind a router configured with NAT (Network Address Translation). The difference between your home or office LAN and an HPC cluster lies in the power of the nodes, the speed of the local network equipment, and the software deployed on them.
In theory, an HPC cluster could simply be a subset of nodes on a general-purpose network. However, using a dedicated switch behind a router isolates all the cluster network traffic, which will improve performance for file server access and parallel program communication, while also eliminating impact on the outside network.
In many clusters, the head node is multi-homed (has two network interfaces) and serves as the gateway for the cluster. SPCM allows for this configuration, but be aware that it complicates the setup of the head node as well as configuration of many services running on the head node, including the scheduler and the Ganglia resource monitor.
The recommended hardware configuration uses a single network interface on every node, including the head node, a separate router/gateway, and a dedicated switch for all cluster traffic. Many modern network switches have built-in routing capability and can serve as both router and local switch. If you're using a simple switch without routing capability for your cluster, you can use an inexpensive hardware router, or quickly and cheaply build a sophisticated firewall router using any PC with two network adapters and pfSense or OPNsense.
A dedicated router appliance is also easier to set up and likely more secure than a head node configured as a gateway. Tools like pfSense are written and maintained by networking experts and provide a convenient web interface for configuration.
In addition, this topology allows direct connection from outside the cluster to any node via port forwarding with different TCP ports. No need to run additional cables to isolate large transfers to/from the file servers. For example, incoming SSH connections on port 22 can be routed to the head node, while connections on port 22001 can route to a file server. In this way, the head node is spared the network load of file transfers, which can annoy interactive users. On a large cluster, you can use faster network interfaces on both the WAN and LAN side of the router to support the full bandwidth of multiple servers within the cluster. E.g., if your cluster nodes all use gigabit interfaces, using 10 GbE on the router will support multiple file transfers to/from different servers at full gigabit speed while still leaving plenty of bandwidth for the head node.
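From the user's side, this looks roughly like the following sketch, matching the example above (the hostname and port numbers are assumptions):

    ssh user@cluster.example.edu                          # port 22: lands on the head node
    scp -P 22001 data.tar.gz user@cluster.example.edu:    # port 22001: routed straight to a file server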
If you plan to use the SPCM PXE installer for non-head nodes, you'll probably want to disable the DHCP server on the router. SPCM can automatically configure your head node as a DHCP and PXE server.
General Node Configuration
It's best to keep the load on the head node as low as possible to ensure snappy response times for shell sessions and for the scheduler. Hence, the head node should not double as a compute node or as a file server for large amounts of data. We generally house /home on the head node so that it is fully functional even when all other nodes are down, but with a very small quota (e.g. 250 MiB). Scientific data are stored on separate file servers so that heavy network traffic and disk loads are isolated.
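A minimal sketch of enforcing such a quota, assuming /home is a ZFS dataset on the head node (the pool, dataset, and user names are hypothetical):

    zfs set userquota@alice=250m zroot/home    # per-user quota on a shared /home dataset
    # or, with one child dataset per user:
    zfs create zroot/home/alice
    zfs set quota=250m zroot/home/alice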
A compute node can double as a file server, and this has the advantage that jobs running on that node have direct access to the disks rather than going over the network via NFS, Gluster, etc. If you do this, be aware that ZFS by default will consume most or all available RAM, thus competing with computational processes. To prevent performance problems, you can limit the ZFS adaptive replacement cache (ARC) to a few gigabytes and subtract the same amount from RealMemory for that node in your slurm.conf.
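For example, on FreeBSD the ARC can be capped with a loader tunable, and the node's usable memory reduced accordingly in slurm.conf. The sizes and node name below are assumptions for a 64 GiB machine:

    echo 'vfs.zfs.arc_max="4G"' >> /boot/loader.conf    # cap the ARC at 4 GiB (takes effect at next boot)
    # slurm.conf: report only the remaining RAM to the scheduler
    # NodeName=compute-001 CPUs=16 RealMemory=61440     # 64 GiB - 4 GiB ARC, in MiB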
Accessing the head node and file servers via NFS may mean cross-mounting them (each is an NFS server for the other), which can cause a deadlock during boot while each waits for the other to enable NFS. On FreeBSD, this issue is easily solved using background mounting (bg flag in /etc/fstab). Note, however, that background mounting does not work on RHEL/CentOS 7 due to incompatibilities with systemd, so if you cross-mount, you will need a more complex setup using autofs or the noauto mount flag + a cron job for late mounting.
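A sketch of the FreeBSD case, assuming a file server that mounts /home from a head node named head (the hostname and paths are hypothetical); the bg option makes the mount retry in the background instead of stalling the boot:

    # /etc/fstab entry on the file server
    head:/home    /home    nfs    rw,bg,intr    0    0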
Visualization nodes should be considered a tool for quick-and-dirty viewing of results. For more sophisticated viewing, users should download the data to a workstation where they can utilize the local display for best graphics performance.
The SLURM scheduler allows any node, including the head node, to be rebooted without affecting running jobs. Hence, it is not generally necessary to maintain a backup head node. It is advisable to have another server handy that can boot from the head node's disk(s) in the event of a motherboard failure. It need not be an identical server, but it should be easy to move the head node's disks over for quick restoration of service. A compute node can serve this purpose if the hardware is similar.
The head node and file servers should generally be on battery backup, but not the compute nodes. Keeping compute nodes running through a power outage would require a truckload of batteries for a large cluster and would greatly reduce battery run time for even a very small cluster.
The head node need not be powerful, but should be very reliable. For large clusters, it is recommended that the head node have redundant power supplies and boot from a RAID with hot swappable disks. ZFS can be utilized to construct a RAID without the need for a hardware RAID controller, though replacing disks in a software RAID is a little more involved. SLURM may use a fair amount of RAM on the head node of a large, busy cluster, but does not need much CPU.
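As a sketch, a two-disk ZFS mirror requires no RAID controller, and a failed disk is replaced with a single command (pool and device names are assumptions; a bootable mirrored root pool is most easily created by the FreeBSD installer):

    zpool create zdata mirror da0 da1    # mirrored pool across two disks
    zpool status zdata                   # check health / resilver progress
    zpool replace zdata da1 da2          # swap a failed disk for a replacement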
For a small personal cluster, a laptop actually makes a pretty good head node with its built-in battery backup, keyboard, and monitor.
File servers should be similarly reliable, with redundant power supplies and RAID. You may want to equip them with faster network interfaces than the compute nodes, since each file server may serve many compute nodes and a single gigabit interface can become a bottleneck relative to the RAID's throughput. Our benchmarks showed little difference in performance between SAS and SATA disks. If you use SATA, however, be sure the disks are server-grade. Low-end SATA disks designed for PCs may not offer the same performance and may not be rated for use in large RAIDs due to vibrational characteristics. File servers should have plenty of RAM for buffering to allow reordering of I/O operations. A few fast processors will serve better than many slower ones in most settings.
Resources for compute nodes should be put toward CPU and RAM. Redundant power supplies are of little use unless interruption of jobs would be catastrophic in your setting. Maximizing MIPS/$ is usually a good strategy for general-use clusters, so the highest-end CPUs are usually not a good value. 16 medium-speed cores will generally be a better value than 8 high-speed cores, unless you need to run many jobs that don't scale well to large numbers of processes. High density (maximizing power per compute node) will generally reduce your electric bill (which will be significant!) and avoid space issues down the road.
Keep in mind that clusters typically reduce months or years of computation to hours or days regardless of the compute node specs. Doubling the cost of a cluster for the 20% gain in speed you get from the latest-and-greatest processors is usually nothing but foolish ego fodder.
Currently Supported Platforms: FreeBSD and RHEL family
Red Hat Enterprise Linux (RHEL) and its derivatives are the de facto standard operating systems for HPC clusters. They are more stable than bleeding-edge Linux distributions, have strong support for HPC system software such as InfiniBand drivers and parallel file systems, and are the only POSIX platforms officially supported by most commercial scientific software vendors.
The main disadvantages of enterprise Linux platforms (compared to FreeBSD or community Linux distributions such as Debian and Gentoo) are the outdated kernels and the outdated packages available in the Yum repositories. (Stability and long-term binary compatibility in enterprise Linux systems are maintained by running older, time-tested, and heavily patched versions of system software.)
We've had great success using pkgsrc to manage more up-to-date open source software on RHEL derivatives. The pkgsrc system is well-supported on Linux, offers far more packages than Yum, and can install a complete set of packages that are almost completely independent from the base Linux installation.
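Once pkgsrc is bootstrapped on a RHEL derivative (conventionally into /usr/pkg, with the source tree under /usr/pkgsrc), installing a newer tool looks roughly like the sketch below; the paths and the chosen package are assumptions:

    cd /usr/pkgsrc/editors/vim
    /usr/pkg/bin/bmake install clean    # build and install under /usr/pkg, independent of Yum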
FreeBSD's unparalleled reliability, near-optimal efficiency, and easy software management via the FreeBSD ports collection make it an ideal platform for HPC clusters. There is no better platform for running scientific software that requires modern development tools or lengthy uninterrupted uptime. FreeBSD is the only operating system we've found that offers enterprise reliability and system management features (binary updates, fully integrated ZFS, etc.) combined with top-tier development tools and software management (Clang/LLVM base compiler, FreeBSD ports, etc.).
An example of FreeBSD's reliability is provided by Peregrine, a FreeBSD HPC cluster built for educational use at the University of Wisconsin -- Milwaukee. Peregrine has never had a node crash or freeze in the absence of a hardware problem in 8 years of service, despite running some extremely intensive jobs that caused outages on other clusters. The only reliability issues encountered were a few head node crashes, traced to a Dell PowerEdge firmware bug affecting single-processor systems, and a compute node crash caused by a bad memory slot.
FreeBSD is the basis of many products used in HPC, including FreeNAS, Isilon, NetApp, OPNsense, Panasas, and pfSense.
Many FreeBSD HPC clusters are in use today, serving science, engineering, and other disciplines. FreeBSD is a supported platform on Amazon's EC2 virtual machine service. It is also a little-known fact that the special effects for the movie "The Matrix" were rendered on a FreeBSD cluster.
FreeBSD can run most Linux binaries natively (with better performance than Linux in some cases) using its CentOS-based Linux compatibility module. This module is *NOT* an emulation layer; it simply adds Linux system calls to the FreeBSD kernel so that Linux binaries run directly, with no performance penalty. The only added cost is a small kernel module and a modest amount of disk space used to house the module and the Linux software.
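Enabling the compatibility layer on a FreeBSD node is roughly a matter of loading the module and installing the CentOS-based userland, along these lines (the userland package name reflects the CentOS 7 base and may differ on other FreeBSD releases):

    sysrc linux_enable="YES"       # load the Linux syscall module at boot
    service linux start            # load it now
    pkg install linux_base-c7      # CentOS 7 userland under /compat/linux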
Skills Required to Manage an SPCM HPC Cluster