How to attach and detach additional volumes to an existing C1 server | Scaleway


#1

How to attach and detach additional volumes to an existing C1 server

This page shows how to attach and detach additional volumes to an existing server.

Requirements

  • You have an account and are logged into cloud.scaleway.com
  • You have configured your SSH Key
  • You have a server

Each server can have at most 15 volumes, including the root volume. Volumes are hosted on LSSDs (local solid state drives), which deliver fast disk I/O.

LSSD volumes are teleported close to your server.

When you start a server for the first time, your volume files are downloaded from the volumes store to the local storage devices (LSSDs).

Each time you start or stop a server, the volumes are downloaded from or uploaded to the volumes store. The larger the amount of data to transfer, the longer the download or upload takes.

We work constantly on optimizing the transfer time between the local storage devices and the volumes store.

There are five steps to attach a volume to an existing server:

  • Create a new volume
  • Attach the volume to your server
  • Format the additional volume
  • Mount additional volumes manually
  • Mount additional volumes with fstab (automatic mount)

Important: The server must be powered off to attach or detach a volume.

Attach a volume to an existing server

In the Control Panel, click “Volumes” in the compute section.

Step 1 - Create a new volume

Click the “Create Volume” button. You will land on the volume-creation page where you must input basic information for your volume:

  • The name of your volume
  • The volume type - LSSD (Local solid state drive)
  • The size in GB

Step 2 - Attach an existing volume to your instance

On the Servers page, click the server you want to attach a volume to.

On the server detail page, click “Attach an existing volume” and select the volume to attach from the list.

Important: To detach the volume, click the Detach button.
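Once the server is powered back on, you can check that the new volume is visible from the OS before formatting it. This is a minimal sketch; it assumes the additional volume shows up as /dev/nbd1, as in the examples below:

```shell
# List all block devices; the additional volume should appear
# alongside the root volume (e.g. nbd1 next to nbd0).
lsblk

# Print size and partition details for the new device (requires root;
# /dev/nbd1 is an assumption based on the examples in this guide):
# fdisk -l /dev/nbd1
```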

Step 3 - Format the additional volume

If the new volume has never been formatted, you need to format the volume using mkfs before you can mount it.

For instance, the following command creates an ext4 file system on the volume:

root@c1-X-Y-Z-T:~# mkfs -t ext4 /dev/nbd1
mke2fs 1.42.9 (4-Feb-2014)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
610800 inodes, 2441406 blocks
122070 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=2503999488
75 block groups
32768 blocks per group, 32768 fragments per group
8144 inodes per group
Superblock backups stored on blocks:
  32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632
Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

Step 4 - Mount your additional volume manually

To mount the device manually as /mnt/data, run the following commands:

root@c1-X-Y-Z-T:~# mkdir -p /mnt/data
root@c1-X-Y-Z-T:~# mount /dev/nbd1 /mnt/data
root@c1-X-Y-Z-T:~# ls -la /mnt/data/
total 24
drwxr-xr-x 3 root root  4096 Jan  1 00:07 .
drwxr-xr-x 3 root root  4096 Jan  1 00:07 ..
drwx------ 2 root root 16384 Jan  1 00:07 lost+found

Step 5 - Mount your additional volume with fstab (automatic mount)

To mount the additional volume automatically, you have to reference the device in the /etc/fstab file. /etc/fstab lists all the devices to mount when they are present.

For instance, to mount the /dev/nbd1 device automatically on the /mnt/data directory, give /etc/fstab the following content:

# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
# <file system> <mount point>   <type>  <options>       <dump>  <pass>
/dev/nbd1 /mnt/data auto  defaults,nobootwait,errors=remount-ro 0 2

The configuration above mounts the /dev/nbd1 device on the /mnt/data directory with the fstab default options plus nobootwait (available on Ubuntu only; on Ubuntu 16.04 and later this option was removed, so use nofail instead). nobootwait prevents boot problems in case your volume is not yet downloaded to the local storage.

Create the /mnt/data directory if it doesn’t exist.

root@c1-X-Y-Z-T:~# mkdir -p /mnt/data

To check that the devices are mounted properly, run mount -a, which mounts all devices listed in /etc/fstab.

Important: On the next server boot, your volumes will be mounted automatically.

Now run df -h to list all your devices and where they are mounted:

root@c1-X-Y-Z-T:~# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/nbd0        23G  420M   22G   2% /
none           1010M   36K 1010M   1% /dev
none            203M   80K  203M   1% /run
none            5.0M     0  5.0M   0% /run/lock
none           1012M     0 1012M   0% /run/shm
none            100M     0  100M   0% /run/user
/dev/nbd1       9.2G  149M  8.6G   2% /mnt/data



This is a companion discussion topic for the original entry at https://www.scaleway.com/docs/attach-and-detach-a-volume-to-an-existing-server/

#2

It’s not working for Debian Jessie. When I add the line to fstab, I get the following error on boot:

[ *** ] A start job is running for dev-nbd1.device (55s / 1min 30s)


#3

Having the same issue with fstab as @eulergui on a C2S running Debian Jessie.


#4

With Ansible it is much easier: just use this role:
ansible scaleway mount


#5

If I understand correctly, this is not possible with the VPS (because you can’t attach additional volumes; cf. the Pricing page). Should this be added to the Requirements?


#6

Do volumes work?
/dev/nbd0 x86_64-ubuntu-wily-2016-03-17_18:04 - 50GB
/dev/nbd1 150GB

Server will not boot with /dev/nbd1 attached

It is formatted and worked after setup using:
mount /dev/nbd1 /mnt/data

Glad I’m only testing. How do I get this to work?


#7

Do you have to wait until you have an export URI address:port?

Is it best to add both volumes to fstab? - No, just the additional volume :smile:

#/dev/nbd0 /                   auto  defaults,nobootwait,errors=remount-ro 0 1
/dev/nbd1 /mnt/data     auto  defaults,nobootwait,errors=remount-ro 0 2

How long does it take to start a server with 150GB volume attached?
How long does it take to reboot a server with 150GB volume attached?

boot issues:

>>> 'ntpdate -d -b -p 6 ip.ip.ip.ip' failed

Firewall is next.


#8

Seems like the automatic mount doesn’t work with the mentioned example in Ubuntu 16.04 LTS (with or without nobootwait, doesn’t matter):

/dev/nbd1 /mnt/data auto  defaults,nobootwait,errors=remount-ro 0 2

Any idea?


#9

I would like to know too please.


#10

Is it possible to restore the fstab file so I can at least boot the server?


#11

On Ubuntu 16.04 you should use the UUID to mount the volume. It’s very simple, and it has become the new standard in the Unix world.

https://help.ubuntu.com/community/UsingUUID
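A sketch of the UUID approach (the UUID below is a placeholder; print the real one for your volume with blkid, and nofail instead of nobootwait is assumed for Ubuntu 16.04):

```
# Print the volume's UUID (run as root):
#   blkid /dev/nbd1
# Then reference the UUID instead of the device name in /etc/fstab:
UUID=42b83b11-aaaa-1234-bbbb-7e7743d09a19 /mnt/data ext4 defaults,nofail,errors=remount-ro 0 2
```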


#12

Hello All,

On Debian Jessie, the following line in your /etc/fstab works:

/dev/vdb /mnt/data ext4 defaults 0 2
kr


#13

Hello All,
I have this problem using Ubuntu 16.04, and have worked round it in the following way.

  1. The option nobootwait is not supported by fstab on Ubuntu 16.04. I used nofail instead.
    This allowed the boot process to continue, but the external volume /dev/nbd1 was of course not mounted.

  2. Now I realise it is a network mount, so of course the network needs to be up to achieve a successful mount.
    I tried Automount, with this fstab entry, but it didn’t work. It just hung.
    UUID=42b83b11-aaaa-1234-bbbb-7e7743d09a19 /mnt/vol1 ext4 noauto,nofail,errors=remount-ro,x-systemd.requires=network-online.target,x-systemd.device-timeout=1,x-systemd.automount 0 2

  3. Since mount -a was working fine with the following fstab entry, I thought I’d create a service to run it during boot.
    UUID=42b83b11-aaaa-1234-bbbb-7e7743d09a19 /mnt/vol1 ext4 defaults,nofail,errors=remount-ro 0 2

  4. My Service in /etc/systemd/system/scw-mountall.service
    [Unit]
    Description=SCW Mount All
    DefaultDependencies=no
    After=syslog.target
    After=network.target

    [Service]
    Type=oneshot
    ExecStart=/bin/mount -a

    [Install]
    WantedBy=multi-user.target

Seems to work!
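For anyone trying this approach: a unit placed in /etc/systemd/system is not started at boot until it is enabled. Assuming the unit file above, the usual steps would be:

```
# Reload systemd so it picks up the new unit file,
# then enable it so it runs at every boot:
systemctl daemon-reload
systemctl enable scw-mountall.service

# Optionally run it once immediately:
systemctl start scw-mountall.service
```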
If you have other solutions, please let me know.
If Scaleway read this post, perhaps someone in the Development team can advise how Scaleway suggest an external volume is mounted.

John


#14

Please Scaleway, what’s the official way to do this (auto mount additional volumes) on Ubuntu 16.04 ?


#15

OK… 30 minutes later: the problem seems to be the nobootwait option, which no longer exists on 16.04.
It seems to work without this option.

@Scaleway, the doc should be updated.


#16

@Scaleway why don’t you guys update this documentation for Ubuntu 16.04, removing the “Size in Go”, etc.?


#17

+1000

@Scaleway : can you please update the doc ?


#18

I can’t make the service work…
So for the moment I’m doing a manual mount after each restart (hopefully it’s not often, but I’m sure I’ll forget next time I restart it!)


#19

The guide works fine with CentOS 7.


#20

What is the link to the guide?