Disk Configuration
You may have one of the following setups:
- Single-disk setup: CloudRift will most likely work out of the box. The only exception is if you're using a smaller logical volume for the system; in that case you need to create another logical volume for VMs.
- Two-disk setup: One disk for the system and a second disk for VM allocation.
- Multiple-disk setup: One system disk and multiple additional disks that will be configured in a RAID array for VM allocation.
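Not sure which setup you have? You can list just the physical disks (ignoring partitions) to check:
# List top-level disks only; compare the count and sizes against the setups above
lsblk -d -o NAME,SIZE,TYPE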
Single-Disk Setup
Most of the time, CloudRift will work out of the box with a single-disk setup. The exception is when a smaller logical volume is used for the system and the rest of the disk is left unallocated.
0. Check the available disks
Run the lsblk command to check the available disks.
If you see a single disk occupying all the available space, you're good to go; CloudRift will work out of the box. Example output:
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
nvme1n1 259:0 0 3.6T 0 disk
nvme0n1 259:1 0 3.6T 0 disk
├─nvme0n1p1 259:2 0 1G 0 part /boot/efi
└─nvme0n1p2 259:3 0 3.6T 0 part /
If you have a single disk but the system uses a smaller logical volume, as in the following example, you'll need to create a data volume for VMs.
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
nvme0n1 259:0 0 3.6T 0 disk
├─nvme0n1p1 259:1 0 1G 0 part /boot/efi
├─nvme0n1p2 259:2 0 2G 0 part /boot
└─nvme0n1p3 259:3 0 3.6T 0 part
└─ubuntu--vg-ubuntu--lv 252:0 0 100G 0 lvm /
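Before creating a new volume, you can check how much unallocated space remains in the volume group; the VFree column shows what is available (assuming the default ubuntu-vg volume group from the output above):
sudo vgs ubuntu-vg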
1. Create a logical volume
Create a new logical volume from the unused space in the volume group:
sudo lvcreate -L 3.4T -n datavol ubuntu-vg
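If you would rather allocate all of the remaining free space instead of a fixed size, lvcreate also supports extent-based sizing; an alternative using the same volume group and volume name:
sudo lvcreate -l 100%FREE -n datavol ubuntu-vg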
2. Format the volume
Create a filesystem on the new volume:
sudo mkfs.ext4 /dev/ubuntu-vg/datavol
3. Mount the logical volume
Create a mount point for CloudRift and mount the new volume.
sudo mkdir -p /media/cloudrift
sudo mount /dev/ubuntu-vg/datavol /media/cloudrift
4. Persist the Mount on Reboot
Add the following line to /etc/fstab:
/dev/ubuntu-vg/datavol /media/cloudrift ext4 defaults 0 2
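If you prefer to append the entry from the shell rather than editing the file manually (the same pattern used in the later sections), something like this works:
echo "/dev/ubuntu-vg/datavol /media/cloudrift ext4 defaults 0 2" | sudo tee -a /etc/fstab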
5. Verify Everything Works
sudo mount -a
df -h /media/cloudrift
It is also a good idea to reboot and rerun the commands above to verify that the mount persists after the reboot.
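Before rebooting, you can also sanity-check the syntax of all fstab entries with findmnt's verify mode (part of util-linux):
findmnt --verify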
Two-Disk Setup
If you have two disks, you need to ensure that the second disk is properly formatted and mounted for CloudRift to use it for VM allocation.
0. Check the available disks
Run the lsblk command to check the available disks.
You should see something like this:
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
nvme0n1 259:0 0 447.1G 0 disk
├─nvme0n1p1 259:1 0 1G 0 part /boot/efi
├─nvme0n1p2 259:2 0 2G 0 part /boot
└─nvme0n1p3 259:3 0 444.1G 0 part
└─ubuntu--vg-ubuntu--lv 252:0 0 100G 0 lvm /
nvme2n1 259:4 0 3.5T 0 disk
In this example, nvme0n1 is the system disk (with mount points) and nvme2n1 is the second disk without a mount point.
If your second disk already has a mount point, you can skip to Next Steps. Otherwise, continue with formatting and mounting.
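Before formatting, it's worth confirming that the disk is really empty, since creating a filesystem will destroy any existing data. Using the device name from the example above:
sudo lsblk -f /dev/nvme2n1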
1. Format the disk
Format the disk to EXT4 or your preferred filesystem.
sudo mkfs.ext4 /dev/nvme2n1
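If you prefer a different filesystem, the steps are the same. For example, XFS (this assumes xfsprogs is available via apt; remember to use xfs instead of ext4 in the fstab entry later):
sudo apt install -y xfsprogs
sudo mkfs.xfs /dev/nvme2n1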
2. Mount the Disk
The mount point location is not important, but we recommend using /media/cloudrift so that you know what this disk is being used for.
sudo mkdir -p /media/cloudrift
sudo mount /dev/nvme2n1 /media/cloudrift
3. Persist the Mount on Reboot
Add an entry to /etc/fstab using the disk's UUID:
sudo udevadm trigger
UUID=$(sudo blkid -s UUID -o value /dev/nvme2n1)
echo "UUID=$UUID /media/cloudrift ext4 defaults,nofail,discard 0 0" | sudo tee -a /etc/fstab
4. Verify Everything Works
sudo mount -a
df -h /media/cloudrift
It is also a good idea to reboot and rerun the commands above to verify that the mount persists after the reboot.
Multiple-Disk Setup
The additional disks will be formatted into a RAID0 array and used for VM allocation.
0. Check the available disks
Run the lsblk command to check the available disks.
You should see something like this:
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
nvme0n1 259:0 0 447.1G 0 disk
├─nvme0n1p1 259:1 0 1G 0 part /boot/efi
├─nvme0n1p2 259:2 0 2G 0 part /boot
└─nvme0n1p3 259:3 0 444.1G 0 part
└─ubuntu--vg-ubuntu--lv 252:0 0 100G 0 lvm /
nvme2n1 259:4 0 3.5T 0 disk
nvme3n1 259:5 0 3.5T 0 disk
nvme4n1 259:6 0 3.5T 0 disk
nvme1n1 259:7 0 3.5T 0 disk
1. Install mdadm
sudo apt update
sudo apt install -y mdadm
2. Create the RAID array
Run the following command to create a RAID0 array. Replace device names appropriately:
sudo mdadm --create --verbose /dev/md0 --level=0 \
--raid-devices=4 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1 /dev/nvme4n1
3. Watch it build (should be fast for RAID 0)
cat /proc/mdstat
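For a more detailed view of the array state and its member devices, you can also query mdadm directly:
sudo mdadm --detail /dev/md0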
4. Create a filesystem
Format the array to EXT4 or your preferred filesystem.
sudo mkfs.ext4 /dev/md0
5. Mount the array
The mount point location is not important, but we recommend using /media/cloudrift so that you know what this disk is being used for.
sudo mkdir -p /media/cloudrift
sudo mount /dev/md0 /media/cloudrift
6. Persist the Mount on Reboot
Add an entry to /etc/fstab using the array's UUID:
sudo udevadm trigger
UUID=$(sudo blkid -s UUID -o value /dev/md0)
echo "UUID=$UUID /media/cloudrift ext4 defaults,nofail,discard 0 0" | sudo tee -a /etc/fstab
7. Verify Everything Works
sudo mount -a
df -h /media/cloudrift
It is also a good idea to reboot and rerun the commands above to verify that the mount persists after the reboot.
Next Steps
To test the node configuration and make your nodes rentable, proceed to the Memory Configuration guide.