Beowulf Cluster Construction Step-By-Step

Protogonus is our Beowulf cluster used for preparing, debugging and inspecting various parallel applications that use MPI.

This guide is provided as-is. We cannot be held liable for any outcome or damage, either direct, consequential, or otherwise.

Last revision: April 9, 2024

The step-by-step guide is a well-organised extract of our construction journal. The end result of these steps is a Beowulf cluster using Ubuntu Server, Clush, and Open MPI. The cluster sits behind its own firewall on its own LAN, with the nodes connected over WIFI.

More detailed information and further resources can be found in the journal and in our Beowulf resources section.

How to use this guide?

  1. Read the guide first.
  2. Read our Beowulf development journal. This provides more details in the form of storytelling. Failures and dead ends are also documented.
  3. Return to this guide and implement its steps.

Obtain the hardware

  1. The support computer. This will not be part of the cluster itself.
  2. A USB stick to be used as the Ubuntu installation medium.
  3. The computers designated to be nodes. One of them (the most powerful one) will be the Head Node, and the others the Worker Nodes.
  4. An ethernet cable. This lets the node being installed download the WIFI packages during setup.
    → You can most likely also install the WIFI components by first saving the related files to the USB stick and installing them from there later.
    → Using switches and cables in lieu of WIFI should result in a faster cluster. For this, obtain all the required cables and switches.
  5. Extension cords for the power supply.

Prepare the Support Computer Optional

This is an additional computer that will not be turned into a node. We assume it runs Windows. This step is optional: you can also do your work directly on the Head Node.

  • Download and install Rufus. This is used for creating the bootable USB.
  • Download the latest version of the Ubuntu Server image. This will be put on the USB stick using Rufus.
  • Download and install Putty. This will be used for connecting to the nodes via SSH.

Create the bootable USB

Use Rufus on the support computer and create the bootable install USB carrying the Ubuntu Server. If asked, you should most likely use the GPT setting in Rufus.
Learn more here.

Set up the LAN and the VPN

The layout we chose is shown here.

Once your LAN & VPN are set up:

  • Check that the internet is accessible from your Beowulf subnetwork.
  • Check that your Beowulf subnetwork can be reached from the outside.
  • Within the Cluster LAN, take note of your subnet and your gateway.

Designate your Head Node

Go through the computers you wish to use as Beowulf nodes. Pick the most powerful one. This will serve as the Head Node. All the other computers will become Worker Nodes.

A benchmarking tool you can use for this purpose is Novabench.

Pick two usernames and their passwords Pen and paper

Write them down for later use. These usernames and passwords will be the same on each computer. Please use strong passwords. Passwordless login will be enabled at a later stage.

  1. One of these usernames will be used as the general system administrator.
  2. The other one will be the user for the MPI system.
    You will also need to come up with a uid for this MPI user. Anything below 1000 is OK (such as 911). This uid needs to be the same on each node for MPI to work as expected.

Plan the IP address and hostname scheme Pen and paper

  • Decide on fixed IPv4 addresses for the nodes.
    → The Head Node could become 100, and the Worker Nodes 100+i.
    That is, the Head Node can become 192.168.1.100, the first Worker Node can become 192.168.1.101, the second Worker Node can become 192.168.1.102 etc.
  • Decide on hostnames for the nodes.
    → A numbered naming scheme allows for easy management and execution. E.g. node000 for the Head Node, node001 for the first Worker Node, node002 for the second Worker Node etc.

Milestone reached The next step is to install Ubuntu on each node.

Install the nodes one after the other Node-by-node

Create the install partition using Windows Optional

If you decide to erase Windows completely, you can skip this step; the Ubuntu installer can handle that for you.

Should you want to retain a Windows partition as well, however, you will need to create the new partition and format it while still in Windows.

Boot the node with the USB stick

This step turned out to be the most annoying and most time-consuming one of them all.

  • Do not plug in the network cable yet.
  • You will most likely have to open the BIOS / UEFI settings, disable secure boot, disable OS optimized boot, and set the boot order so that the USB stick is used for booting. Opening the BIOS (UEFI) settings menu differs by manufacturer; you can also use a special Windows restart to reach the UEFI menu.

If the boot is successful, you will see the GRUB menu offering to install Ubuntu Server.

Install Ubuntu on the node

  • make sure that the LAN cable is not plugged in yet
  • do not use LVM
  • do not install OpenSSH yet; it will be installed later
  • provide the username and password for the admin user (the same for each node). This is not the MPI user. The MPI user will be added later.

If installation is successful:

  1. Plug in the network cable.
  2. Reboot Ubuntu.
    → Any boot-time errors and timeouts can be fixed later using our resources section.
  3. Log in with the admin username and password.
  4. sudo apt update
    → If there is a network problem, you will see it at this point.
    → DHCP might cause timeout and prevent connecting to the network. Setting up the fixed IP as detailed in a following step most likely solves this problem.

Wipe empty space Optional

If you are using older computers, it may make sense to wipe their empty space. This prevents potential trouble arising later from GDPR or other confidentiality requirements. The process is quick to start but can take a long time to finish.
→ Wiping is described here (see also the description related to secure-delete’s sfill).
→ Based on our experience, directly logging in to the nodes is the most stable solution.
→ If you want to take advantage of the clustered setup, you can try ssh or clush, deferring this step to when the entire cluster is set up.

  1. cat /dev/zero > zero.file
  2. sync
  3. rm zero.file
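
If you defer this step to when clush is available, a rough equivalent for all Worker Nodes could be the line below (a sketch assuming the node[001-00n] naming scheme; cat stops with a "No space left on device" message once the free space is filled, which is expected):

clush -w NODE[001-00n] -L 'cat /dev/zero > zero.file; sync; rm zero.file'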

Install a few networking tools

  1. sudo apt install network-manager
  2. sudo apt-get install nmap
  3. sudo apt install net-tools
  4. sudo apt install clustershell

Install WIFI Optional

If you do not wish to use WIFI, you can skip this step and keep accessing the network through cables. Nevertheless, even if this is the case, having WIFI access can make your setup more flexible in case of unforeseeable events.

  1. Keep the network cable plugged in at this point.
  2. sudo nmcli dev wifi connect <SSIDofYourWifiAccessPoint> password <YourWifiPassword>
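
To check the result, these standard NetworkManager commands can be useful (optional):

  1. nmcli dev wifi list
    → Lists the access points the adapter can see.
  2. nmcli con show --active
    → Shows the active connections and their names (useful for the next step).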

Set up fixed IP

Use ip route to find your gateway IP and nmcli con show to find your connection name. Use the following commands, entering YourConnectionName, your chosen fixed IP, and YourDefaultGateway as applicable.

  1. sudo nmcli con mod YourConnectionName ipv4.addresses xxx.xxx.xxx.xxx/24
  2. sudo nmcli con mod YourConnectionName ipv4.method "manual"
  3. sudo nmcli con mod YourConnectionName ipv4.dns "8.8.8.8 8.8.4.4"
  4. sudo nmcli con mod YourConnectionName ipv4.gateway YourDefaultGateway
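
For example, the first Worker Node in the example addressing scheme would get 192.168.1.101/24 as its ipv4.addresses value; the gateway shown below is only an illustration, so use the one you noted down when setting up the LAN:

  1. sudo nmcli con mod YourConnectionName ipv4.addresses 192.168.1.101/24
  2. sudo nmcli con mod YourConnectionName ipv4.gateway 192.168.1.1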

Restart the connection with the following two commands, entered on a single line with ; in between, as shown below. The connection will drop either way; chaining the two commands ensures it comes back up so that you can reconnect through Putty.
sudo reboot is an alternative.

sudo nmcli con down YourConnectionName; sudo nmcli con up YourConnectionName

If the confirmation messages look OK, you can unplug the network cable for WIFI-only layouts.
→ Test the network using ping google.com

Add the mpiuser

Use the details you decided on in the applicable earlier step.

sudo adduser YourMpiUserUserName --uid YourMpiUserUID
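
For example, with mpiuser as the username and the example uid from the planning step, this becomes:

sudo adduser mpiuser --uid 911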

Install Open MPI

Head Node

  1. sudo apt-get install libopenmpi-dev
  2. sudo apt-get install openmpi-bin
  3. sudo apt-get install openmpi-doc
  4. sudo apt-get install make
  5. sudo apt-get install g++

Worker Node

sudo apt-get install openmpi-bin
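
As an optional per-node sanity check (a minimal sketch, not part of the original steps), you can confirm that the Open MPI runtime is in place:

  1. mpirun --version
  2. mpirun -np 2 hostname
    → The second command should print the node's hostname twice. If Open MPI complains about available slots on a machine with very few cores, add --oversubscribe.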

Install the NFS server

Head Node

sudo apt-get install nfs-kernel-server

Worker Node

sudo apt-get install nfs-common

Include the hostnames in the hosts file

Follow the instructions about the structure here.
→ The hosts file (/etc/hosts) will have the same contents on each node.
→ Ensure that only a single 127.x line remains (127.0.0.1 localhost), while all other lines map the nodes' actual IPs to their hostnames.

You can use sudo nano /etc/hosts to do the editing. It may also make sense to prepare the entries for future nodes as well, not limiting them to the nodes being installed in this round.
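
With the example addressing and naming scheme from the planning step, the file could look something like this (a sketch; extend it with one line per node):

127.0.0.1 localhost
192.168.1.100 node000
192.168.1.101 node001
192.168.1.102 node002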

Set up SSH

  1. sudo apt install openssh-server
  2. sudo ufw allow ssh
  3. sudo systemctl status ssh

It is now possible to log in to the node from the support computer using Putty. The first ssh login will produce a warning explaining that the host is not yet known; if you save the connection, the warning will not be repeated. The same applies when logging in with ssh from another Linux computer (WSL included) instead of Putty on Windows.

Mark the node

Put stickers on the node, and also on both ends of each cable. These stickers should show the number of the node. This reduces confusion later.

Milestone reached Repeat the above steps for each node.

Arrange the cluster

  1. Arrange the nodes in their physical location and plug them in.
  2. Start the WIFI, if applicable.
  3. Start up all the computers.

Milestone reached Putty to the Head Node and continue from there.

Finish setting up the cluster Work from the Head Node

Set up passwordless login

The description is here and is fairly straightforward.
→ Repeat the steps for both the admin user and the mpiuser.
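
If you do not have the linked description at hand, a typical sequence (a minimal sketch of the usual key-based approach, which may differ in its details from the linked one) run on the Head Node is:

  1. ssh-keygen -t rsa
    → Accept the defaults and leave the passphrase empty.
  2. ssh-copy-id <WorkerNodeName>
    → Repeat for every Worker Node, and run the whole sequence once for the admin user and once for the mpiuser.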

List all the IPs of the nodes

sudo nmap -sP <YourSubnetIP>/24

If any worker is missing, check it & troubleshoot.

Ping the Head Node from each Worker Node

You can use IP addresses or hostnames, and work with clush or ssh. Clush could look something like:

clush -w NODE[001-00n] -L ping -c 3 master

Here 001–00n should reflect your actual Worker Node hostname range (clush can use [ ] for ranges), and master should be replaced with your Head Node's hostname. If you spot any network problems at this point, address them.
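
With the example naming scheme and, say, two Worker Nodes, this could look like:

clush -w node[001-002] -L ping -c 3 node000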

Check internet access

Network access will be needed for updating Ubuntu even if you do not want the Worker Nodes to access the internet in general.

ping -c 3 google.com for the Head Node.

clush -w NODE[001-00n] -L ping -c 3 google.com for the Worker Nodes.

Share the mpiuser folder using NFS

To share the entire mpiuser folder, add the line

/home/mpiuser *(rw,sync,no_subtree_check)

to /etc/exports, for example using sudo nano /etc/exports. For less clutter, you can instead share just a specific subfolder: switch to the user mpiuser, create a shareable subfolder, and share only that. This may, however, be somewhat less flexible.
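
If you go for a subfolder, the export line changes accordingly; for example, with a hypothetical subfolder named shared:

/home/mpiuser/shared *(rw,sync,no_subtree_check)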

Once you have updated /etc/exports, share the folder:

  1. sudo service nfs-kernel-server restart
  2. sudo ufw allow from <YourSubnetIP>/24

Ensure that the Worker Nodes have access to the shared folder. Troubleshoot if needed.

clush -w NODE[001-00n] -L showmount -e <HeadNodeIP>

Mount the NFS folder on the Worker Nodes

The part before the clush command here is our workaround for supplying the sudo password non-interactively:

echo <YourPassword> | clush -w NODE[001-00n] -L sudo -S mount <HeadNodeName>:/home/mpiuser /home/mpiuser

Test access as follows, and troubleshoot as needed.

  1. sudo touch /home/mpiuser/test
  2. clush -w NODE[001-00n] -L ls /home/mpiuser

Implement automount

On each Worker Node, add the following line to /etc/fstab

<HeadNodeName>:/home/mpiuser /home/mpiuser nfs defaults 0 0

You can use ssh and passwordless login together with nano to achieve this. Copying an fstab file via the mounted mpiuser folder is also a possibility.

Final touches

Use sudo reboot on each node. You can also use clush to reboot the cluster. Once the reboot is finished, your Beowulf cluster should be ready for your HPC work; an optional end-to-end check is sketched below. Upgrade Ubuntu as needed.
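
As a final, optional end-to-end check (a minimal sketch, not part of the original steps), you can let Open MPI start a trivial command across the nodes as the mpiuser. The hostfile location and slot counts below are assumptions, so adjust them to your own nodes:

  1. Create /home/mpiuser/hostfile with one line per node, for example:
    node000 slots=2
    node001 slots=2
  2. mpirun --hostfile /home/mpiuser/hostfile -np 4 hostname
    → Each launched process prints the hostname of the node it runs on, so you should see every listed node in the output.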

Milestone reached C O N G R A T U L A T I O N S ! ! !


Learn More

You can read the Executive Summary of Protogonus and our Construction Journal. We also provide a collection of www resources for building a Beowulf cluster.