Protogonus: The FINAL Labs™ HPC Cluster
Protogonus is our on-premise Beowulf cluster for preparing, debugging, and inspecting parallel applications that use MPI. We use it for learning, as well as for developing, testing, and occasionally applying high-performance computing techniques in areas of research that interest us.
The system was created during the COVID-19 pandemic, when we realised that the old, unused computers we had lying around were too slow to be donated for home schooling and, for the same reason, could not be sold at a reasonable price either. Scrapping these otherwise properly functioning machines seemed a waste of resources and a strain on the environment. We concluded that a lean installation might retain sufficient speed and provide a fun and exciting project that could even be useful for our research activities. This is how the decision to build Protogonus was made.
Protogonus in mythology
Protogonus is the creator god, the First Born of Greek mythology, responsible for creating the universe, for procreation, and for generating new life.
While the researchers at FINAL Labs™ do not see themselves as creators of new life, we consider this mythical name appropriate for this computational tool that we sometimes use.
Layout of the cluster
Boundary
The boundary is a router and firewall system that provides access to and from the dedicated network and ties together all elements of the dedicated HPC network. A VPN tunnel connects the encompassing LAN to the head node.
Networking
The cluster uses its own dedicated subnetwork, which connects to the encompassing LAN and the internet via the boundary.
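As an illustration only (the actual addresses of Protogonus are not published here), a dedicated subnet like this is often pinned down with static entries in `/etc/hosts` on every node, so that SSH and MPI can resolve node names without external DNS. All addresses and host names below are hypothetical:

```
# /etc/hosts fragment (hypothetical addresses and names)
192.168.10.1    boundary   # router/firewall, gateway to the LAN and the internet
192.168.10.10   head       # head node, also serving the shared storage
192.168.10.101  node01     # compute nodes
192.168.10.102  node02
192.168.10.103  node03
```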
Nodes
Protogonus consists of a head node and an ever-increasing number of homogeneous compute nodes.
The head node orchestrates the operation of the compute nodes. In our current setup, it also provides the shared storage bucket for the compute nodes. Users accessing the system via the boundary manage it primarily through the head node, although the individual compute nodes can also be reached this way.
The compute nodes perform segments of the specific calculation tasks assigned to the cluster, as arranged by the head node and by the loaded application. They use the head node and its network-attached bucket to receive and return data and to communicate.
Software
- Windows 10: PuTTY and a VPN client for the remote connection
- Linux: Ubuntu Server with Wi-Fi, SSH, and several auxiliary packages, including ClusterShell; Open MPI with Fortran and C++
Learn More
You can read our Construction Journal. We also provide a collection of web resources for building a Beowulf cluster, as well as step-by-step instructions based on our experience.