SLURM Standalone on OpenStack

Launch Login Node

To set up a cluster, you will need to import a Flight Solo image.

Before setting up a cluster on OpenStack, there are several required prerequisites:

  • Your own keypair
  • A network
  • A router
    • With an interface both on the External Gateway network and an Internal Interface on the previously created network
  • A security group that allows the traffic shown below (if creating the security group through the web interface, the "Any" protocol rows must be added as "Other Protocol" rules with an "IP Protocol" of -1):
Protocol  Direction  CIDR                  Port Range
Any       egress     0.0.0.0/0             any
Any       ingress    Virtual Network CIDR  any
ICMP      ingress    0.0.0.0/0             any
SSH       ingress    0.0.0.0/0             22
TCP       ingress    0.0.0.0/0             80
TCP       ingress    0.0.0.0/0             443
TCP       ingress    0.0.0.0/0             5900-5903


The "Virtual Network CIDR" is the subnet and netmask of the network that the nodes are using. For example, a node on the 192.168.100.0 network with a netmask of 255.255.255.0 would have a network CIDR of 192.168.100.0/24.
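If you are unsure of a network's CIDR, it can be computed from the address and netmask. A quick check, assuming python3 is available and using 192.168.100.0 with netmask 255.255.255.0 as arbitrary example values:

```shell
# Convert an address + netmask pair into CIDR notation
# (example values only; substitute your own network's address and mask)
python3 -c "import ipaddress; print(ipaddress.ip_network('192.168.100.0/255.255.255.0'))"
# prints: 192.168.100.0/24
```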

The documentation includes instructions for importing an image to OpenStack, and guides for setting up the other prerequisites can be found in the OpenStack documentation.
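If you prefer the command line, the prerequisites can also be created with the OpenStack CLI. The following is a sketch, not a definitive recipe: all resource names (flight-net, flight-subnet, flight-router, flight-secgroup, mykey) and the external network name (public) are assumptions to be replaced with your own values.

```shell
# Key pair, network, subnet, and router with an external gateway
openstack keypair create --public-key ~/.ssh/id_rsa.pub mykey
openstack network create flight-net
openstack subnet create --network flight-net --subnet-range 192.168.100.0/24 flight-subnet
openstack router create flight-router
openstack router set --external-gateway public flight-router   # "public" = your external network
openstack router add subnet flight-router flight-subnet

# Security group matching the table above ("any" protocol maps to IP protocol -1)
openstack security group create flight-secgroup
openstack security group rule create --egress  --protocol any flight-secgroup
openstack security group rule create --ingress --protocol any --remote-ip 192.168.100.0/24 flight-secgroup
openstack security group rule create --ingress --protocol icmp flight-secgroup
openstack security group rule create --ingress --protocol tcp --dst-port 22 flight-secgroup
openstack security group rule create --ingress --protocol tcp --dst-port 80 flight-secgroup
openstack security group rule create --ingress --protocol tcp --dst-port 443 flight-secgroup
openstack security group rule create --ingress --protocol tcp --dst-port 5900:5903 flight-secgroup
```

These commands require working OpenStack credentials (an sourced openrc file or clouds.yaml) and so cannot be run outside your cloud environment.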

To set up a cluster:

  1. Go to the OpenStack instances page.

  2. Click "Launch Instance", and the instance creation window will pop up.

  3. Fill in the instance name, leave the number of instances as 1, and click "Next".

  4. Choose the desired image to use by clicking the up arrow at the end of its row. It will be displayed in the "Allocated" section when selected.

  5. Choose the desired instance size by clicking the up arrow at the end of its row. It will be displayed in the "Allocated" section when selected.

  6. Choose a network in the same way as an image or instance size. Note that all nodes in a cluster must be on the same network.

  7. Choose a security group in the same way as an image or instance size. Note that all nodes in a cluster must be in the same security group.

  8. Choose the keypair in the same way as an image or instance size.

  9. In the "Configuration" section, there is a "Customisation Script" text box. This is where you set your user data.

  10. When all options have been selected, press the "Launch Instance" button to launch. If the button is greyed out, then a mandatory setting has not been configured.

  11. Go to the "Instances" page in the "Compute" section. The created node should be listed there, finishing or having finished creation.

  12. Click on the down arrow at the end of the instance row. This will bring up a drop-down menu.

  13. Select "Associate Floating IP"; the IP management window will pop up.

  14. Associate a floating IP, either by using an existing one or allocating a new one.

    1. To use an existing floating IP:

      1. Open the IP Address drop-down menu.

      2. Select one of the IP Addresses.

      3. Click "Associate" to finish associating an IP.

    2. To allocate a new floating IP:

      1. Click the "+" next to the drop-down arrow to open the allocation menu.

      2. Click "Allocate IP".

  15. Click "Associate" to finish associating an IP.
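Steps 1-15 can also be performed with the OpenStack CLI. A sketch, assuming hypothetical names (a flight-solo image, an m1.medium flavor, a flight-net network, a flight-secgroup security group, a mykey keypair, and user-data.yml holding the customisation script) that you would replace with your own:

```shell
# Launch the login node with the chosen image, flavor, network, and key
openstack server create \
  --image flight-solo \
  --flavor m1.medium \
  --network flight-net \
  --security-group flight-secgroup \
  --key-name mykey \
  --user-data user-data.yml \
  login1

# Allocate a floating IP from the external network (here "public") and attach it
openstack floating ip create public
openstack server add floating ip login1 203.0.113.10   # substitute the IP printed above
```

As with the earlier CLI sketch, these commands need valid OpenStack credentials for your cloud.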

General Configuration

Create Node Inventory

  1. Parse your node(s) with the command flight hunter parse.

    1. This will display a list of hunted nodes, for example

      [flight@login-node.novalocal ~]$ flight hunter parse
      Select nodes: (Scroll for more nodes)
         login-node.novalocal -
         compute-node-1.novalocal -

    2. Select the desired node to be parsed with Space, and you will be taken to the label editor

      Choose label: login-node.novalocal

    3. Here, you can edit the label like plain text

      Choose label: login1


      You can clear the current node name by pressing Down in the label editor.

    4. When done editing, press Enter to save. The modified node label will appear next to the IP address and original node label.

      Select nodes: (Scroll for more nodes)
         login-node.novalocal - (login1)
         compute-node-1.novalocal -

    5. From this point, you can either hit Enter to finish parsing and process the selected nodes, or continue changing nodes. Either way, you can return to this list by running flight hunter parse.

    6. Save the node inventory before moving on to the next step.


      See flight hunter parse -h for more ways to parse nodes.

Add genders

  1. Optionally, you may add genders to the newly parsed node. For example, if the node should have the genders cluster and all, run the command:
    flight hunter modify-groups --add cluster,all login1
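Genders set this way can later be queried with the nodeattr tool from the genders package. A hedged example, assuming login1 was given the cluster gender as above and that nodeattr is installed on the node:

```shell
# List all nodes carrying the "cluster" gender, space-separated
nodeattr -s cluster

# Exit status is 0 if login1 has the "cluster" gender
nodeattr login1 cluster && echo "login1 is in cluster"
```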

SLURM Standalone Configuration

  1. Configure profile

    flight profile configure
    1. This brings up a UI where several options need to be set. Use the Up and Down arrow keys to scroll through options and Enter to move to the next option. Options in brackets coloured yellow are the defaults, which will be applied if nothing is entered.
      • Cluster type: The type of cluster setup needed, in this case select Slurm Standalone.
      • Cluster name: The name of the cluster.
      • Default user: The user that you log in with.
      • Set user password: Set a password to be used for the chosen default user.
      • IP or FQDN for Web Access: the public IP address or public hostname of the login node.
  2. Apply an identity by running the command flight profile apply, e.g.

    flight profile apply login1 all-in-one


    You can check all available identities for the current profile with flight profile identities.

  3. Wait for the identity to finish applying. You can check the status of all nodes with flight profile list.


    You can watch the progress of the application with flight profile view login1 --watch.


Congratulations, you've now created a SLURM Standalone environment! Learn more about SLURM in the HPC Environment docs.

Verifying Functionality

  1. Create a job script file (e.g. testjob.sh) and copy this into it:

    #!/bin/bash -l
    echo "Starting running on host $HOSTNAME"
    sleep 30
    echo "Finished running - goodbye from $HOSTNAME"

  2. Run the script with sbatch. To test all of your nodes, queue up enough jobs that every node has to run at least one.

  3. In the directory that the job was submitted from, there should be a slurm-X.out file, where X is the job ID returned by the sbatch command. It will contain the echo messages from the script created in step 1.
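The whole check can be scripted. A sketch, assuming the job script from step 1 was saved as testjob.sh (a hypothetical name) and that SLURM is up:

```shell
# Queue several copies of the job so every node gets work
for i in 1 2 3; do
  sbatch testjob.sh
done

# Jobs should appear here, then clear roughly 30 seconds later
squeue

# After the jobs finish, each slurm-<jobid>.out should contain the echo lines
grep -H "goodbye" slurm-*.out
```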