Automatic mounting of additional volumes using systemd on Ubuntu



In this tutorial I will explain how to mount your additional volumes on Ubuntu 16.04 with systemd.

  • First, create the directory in which we will mount the volume: mkdir -p /mnt/data
  • Now create a file system on the volume; in my example I use ext4: mkfs.ext4 /dev/nbd1
  • Now we need a systemd unit that mounts the volume automatically on boot. Create/edit the file /etc/systemd/system/mnt-data.mount and put the following content in it:

[Unit]
Description=Mount NBD volume at boot

[Mount]
What=/dev/disk/by-uuid/yourUUID
Where=/mnt/data
Type=ext4
Options=defaults

[Install]
WantedBy=multi-user.target



  • To find the UUID of your volume, you can type: blkid
  • The file name of the unit must correspond to the path where you mount the volume (/mnt/data → mnt-data.mount)
  • Now reload systemd: systemctl daemon-reload
  • Launch the script to mount the volume: systemctl start mnt-data.mount
  • Finally, enable the unit to mount your volume automatically during boot: systemctl enable mnt-data.mount
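As a convenience when filling in the unit file, blkid can print just the UUID value, which is handy for building the What= line. A small sketch, assuming your volume is /dev/nbd1 (the by_uuid_path helper is my own, not part of any tool):

```shell
DEV=/dev/nbd1   # assumption: your additional volume

# -s UUID selects just that tag, -o value prints the bare value
# (no NAME= prefix and no quotes).
UUID=$(blkid -s UUID -o value "$DEV")

# Helper to build the What= path used in the .mount unit.
by_uuid_path() { printf '/dev/disk/by-uuid/%s\n' "$1"; }

by_uuid_path "$UUID"
```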

How to attach and detach additional volumes to an existing C1 server | Scaleway

Thank you so much for this @bene, it worked for me without the double quotes " " around the UUID on line 5 of your file. This was post-boot though, so I am going to test it with a few reboots :slight_smile:


Note that in case you want to mount to a location other than /mnt/data you have to change the file name accordingly. e.g. mounting to /root/test would need the file to be named root-test.mount.
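For awkward paths, systemd also ships a helper that computes the expected unit name for you, which is safer than translating slashes to dashes by hand:

```shell
# Print the .mount unit name systemd expects for a given mount path.
systemd-escape -p --suffix=mount /root/test   # -> root-test.mount
systemd-escape -p --suffix=mount /mnt/data    # -> mnt-data.mount
```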


Thanks a lot @bene for this tutorial, was looking for it.


Thanks for the guide, it works great. But I have a question please: how do I add multiple volumes (nbd2, nbd3, etc.)?

Appreciate your feedback @bene


Thank you very much for that nice tutorial.

I tried the same with a Luks encrypted volume but without success.
Does there exist a tutorial how to mount Luks encrypted volumes at boot?


Wow. Why is this still not fixed? I was trying to create a VC1M server with the standard 100 GB of data, but it always failed to mount the standard mountable disk. So I tried your version as stated above, and it finally works. So much better. This should be the official tutorial for Ubuntu 16.10+.


FYI, on Ubuntu 16.04+, you can also simply add your additional volumes to /etc/fstab

Example with volume /dev/vdb
1/ Format the volume

mkfs -t ext4 /dev/vdb

2/ Create mount directory

mkdir -p /mnt/data

3/ Manually mount to check if everything is okay

mount /dev/vdb /mnt/data

4/ Add to /etc/fstab

/dev/vdb /mnt/data auto defaults,nofail,errors=remount-ro 0 2

5/ Reboot and check the volume using df -h or whatever you like!

The problem is that the Scaleway official tutorial recommends using the “nobootwait” option, which is no longer available and will prevent your server from booting… Ubuntu now recommends using the nofail option instead of nobootwait. Not sure what the difference is, but it works for me.
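As a variant of the fstab entry above, you can mount by UUID instead of device name, so the entry stays valid if devices get renumbered. A minimal sketch, assuming /dev/vdb and /mnt/data as in the example (the fstab_line helper is my own):

```shell
#!/bin/sh
DEV=/dev/vdb    # assumption: your additional volume
MNT=/mnt/data   # assumption: desired mount point

# Build the fstab line; UUID-based so it is stable across device renaming,
# and nofail so a missing volume does not block boot.
fstab_line() {
  printf 'UUID=%s %s ext4 defaults,nofail,errors=remount-ro 0 2\n' "$1" "$2"
}

UUID=$(blkid -s UUID -o value "$DEV")
mkdir -p "$MNT"
fstab_line "$UUID" "$MNT" >> /etc/fstab
mount "$MNT"    # mounts using the new fstab entry
```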

Hope it helps


How do you do this on CentOS 7?


Thanks @bene ! Your approach works!


I created a bash script that does what described here

Usage example:

root@server:~ ./ /dev/nbd1 /mnt/data
Formatting /dev/nbd1
Device /dev/nbd1 UUID: d42b7140-7f36-4674-a171-7e782f2dfa96
Creating service: /etc/systemd/system/mnt-data.mount
Starting service

root@server:~ df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/nbd0        46G  1.3G   43G   3% /
/dev/nbd1       138G   60M  131G   1% /mnt/data



Thx a lot for your script. I just couldn't mount an additional volume (nbd1) using fstab on boot on CentOS 7.3.
I had the entry in fstab, and after boot, when I executed mount -a, the volume got mounted, but automount on boot kept failing no matter what option I tried. With your script everything is working.