Expanding LSSD

#1

I know you can mount additional LSSD volumes to your C1 server, but is it possible to expand the storage of a current volume?

Say, for example, you have your primary 50GB volume, and you want to add on an additional 150GB.

The server will basically see 2 separate volumes. Is there any way to combine these volumes so the server sees 200GB on a single volume instead?

#2

Hello @TechKat,

The most suitable solution is to create a new image with a bigger volume size. To do that, you have to rebuild the image from scratch with the desired volume size.

  • 1/ Create a new server - Click the Create server button and select the image-builder InstantApp in the Imagehub tab.

  • 2/ Configure the builder - In the terminal type the following: image-builder-configure and follow the configuration wizard.

  • 3/ Install git - apt-get install git

  • 4/ Clone the Scaleway repository of the image you want to build with a higher volume size. For instance, to build Ubuntu 14.10, run git clone https://github.com/scaleway/image-ubuntu.git on the image-builder server.

You can retrieve all images at https://github.com/scaleway

  • 5/ Override the default volume size: edit ubuntu/14.10/Makefile and add the following at the top of the file:

IMAGE_VOLUME_SIZE = 150G

150G represents the size of the root volume.

  • 6/ In ubuntu/Makefile, edit VERSIONS to build only the desired version of Ubuntu. In my case I want to rebuild Ubuntu 14.10, so the variable looks like this:

VERSIONS ?= 14.10

  • 7/ Build the new image by running make image_on_local

  • 8/ Et voilà, you have a new image ready to use in your control panel.
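For reference, steps 3–7 condensed into a single sketch. This is a hedged outline, not a verified recipe: the clone directory name and the Makefile paths are assumptions based on the steps above, and the repository layout may have changed since this thread was written.

```shell
# Run on the image-builder server, after image-builder-configure has completed.
apt-get install -y git

# Clone the image repository you want to rebuild (Ubuntu used as the example).
git clone https://github.com/scaleway/image-ubuntu.git
cd image-ubuntu

# Override the default root volume size for the 14.10 build (example value).
sed -i '1i IMAGE_VOLUME_SIZE = 150G' 14.10/Makefile

# Build only the desired version; VERSIONS on the command line overrides the
# "VERSIONS ?=" default in the Makefile.
make VERSIONS=14.10 image_on_local
```

Passing `VERSIONS=14.10` on the `make` command line is equivalent to editing the `VERSIONS ?=` default in the Makefile, since command-line variables take precedence over `?=` assignments.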

#3

@edouard I understand what you mean, but I don’t want to be messing around with creating images and things like that.

#4

Can you give us more information about the use case?

#5

@edouard Like for example, in a virtualization application, you can change the disk size for a virtual machine and normally the guest OS would be okay with it.

Since Scaleway uses physical servers and NBD to connect them to their volumes, or "virtual disks", is there not a way to increase the disk size? You start off with 50GB, and some users may want to increase that without having to add more volumes. I can see that isn't possible, but what I was hoping for is: is it possible to join volumes together to make one single disk?

Like, your default filesystem is on /dev/nbd0, and adding another volume adds /dev/nbd1 to your system. Can it not just be added to the original /dev/nbd0? I'm doing my best to explain this as clearly as I can.

Think of it like JBOD (Just a Bunch Of Disks). That mode just takes a bunch of hard drives and combines them into one drive. This is what I was asking about.

#6

OK, you should have a look at LVM, which allows you to abstract a bunch of disks into one drive. Unfortunately, I do not think it can work with the default root volume (nbd0), but it works fine with additional volumes (nbd1 and above).
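For the record, a minimal sketch of what combining two additional volumes with LVM looks like. The device names /dev/nbd1 and /dev/nbd2, the group/volume names, and the mount point are all assumptions for illustration; run as root.

```shell
# Mark each extra volume as an LVM physical volume.
pvcreate /dev/nbd1 /dev/nbd2

# Group both physical volumes into one volume group.
vgcreate data /dev/nbd1 /dev/nbd2

# Carve a single logical volume spanning all free space in the group,
# i.e. one "disk" backed by both volumes.
lvcreate --name storage -l 100%FREE data

# Format and mount it like any other block device.
mkfs -t ext4 /dev/data/storage
mkdir -p /mnt/storage
mount /dev/data/storage /mnt/storage
```

The server then sees one filesystem whose size is roughly the sum of the two volumes, which is the JBOD-style behaviour asked about above.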

#7

I was afraid that would be the case. It should be an option to increase the nbd0 disk size.

#8

Hi TechKat and edouard,

I would also like to expand the storage of the first volume; did you find a solution?

For my part, I tried LVM as edouard suggested, but I'm facing an error. Any idea?

fdisk /dev/nbd1
Command (m for help): p

Disk /dev/nbd1: 50.0 GB, 49999998976 bytes
255 heads, 63 sectors/track, 6078 cylinders, total 97656248 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x4e1fad2c

 Device Boot      Start         End      Blocks   Id  System
/dev/nbd1p1        2048    97656247    48827100   83  Linux

Create the volume:

root@testvol:~# pvcreate /dev/nbd1
Physical volume "/dev/nbd1" successfully created

root@testvol:~# vgcreate fileserver /dev/nbd1
Volume group "fileserver" successfully created

root@testvol:~# vgdisplay
--- Volume group ---
VG Name fileserver
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 1
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 0
Open LV 0
Max PV 0
Cur PV 1
Act PV 1
VG Size 46.56 GiB
PE Size 4.00 MiB
Total PE 11920
Alloc PE / Size 0 / 0
Free PE / Size 11920 / 46.56 GiB
VG UUID Hdx4pZ-r0ZS-6871-RLGa-MeDt-ntv5-LWBSBt

root@testvol:~# lvcreate --name Vol1 --size 40G fileserver
/dev/fileserver/Vol1: not found: device not cleared
Aborting. Failed to wipe start of new LV.
device-mapper: remove ioctl on failed: Device or resource busy
semid 425984: semop failed for cookie 0xd4d8738: incorrect semaphore state
Failed to set a proper state for notification semaphore identified by cookie value 223184696 (0xd4d8738) to initialize waiting for incoming notifications.

#9

Just tested and I’m hitting the same bug.

This seems to be a known issue: https://lists.debian.org/debian-user/2012/12/msg00407.html

The --noudevsync option of lvcreate solves the issue; I was able to create my logical volume with it.
lvcreate --noudevsync --size 1G test works for me.

We need to investigate further to see why it’s still broken.

#10

OK, but unfortunately, when I mounted it with fstab, the LV did not mount after a reboot.

~# pvcreate /dev/nbd1
Physical volume "/dev/nbd1" successfully created
~# pvdisplay
"/dev/nbd1" is a new physical volume of "46.57 GiB"
--- NEW Physical volume ---
PV Name /dev/nbd1
VG Name
PV Size 46.57 GiB
Allocatable NO
PE Size 0
Total PE 0
Free PE 0
Allocated PE 0
PV UUID Opz0v8-ZTjg-DsWR-ucr8-3HML-f3CN-HmHgdo

~# vgcreate fileserver /dev/nbd1
Volume group "fileserver" successfully created
~# lvcreate --noudevsync --name Vol1 --size 40G fileserver
Logical volume "Vol1" created
~# mkfs -t ext4 /dev/fileserver/Vol1

~# mkdir /var/www/documents
~# mount /dev/fileserver/Vol1 /var/www/documents
~# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/nbd0 46G 6.1G 38G 15% /
none 1010M 4.0K 1010M 1% /dev
tmpfs 203M 7.4M 196M 4% /run
none 4.0K 0 4.0K 0% /sys/fs/cgroup
none 5.0M 0 5.0M 0% /run/lock
none 1012M 0 1012M 0% /run/shm
none 100M 0 100M 0% /run/user
/dev/mapper/fileserver-Vol1 40G 176M 38G 1% /var/www/documents

~# nano /etc/fstab
/dev/mapper/fileserver-Vol1 /var/www/documents ext4 defaults,nobootwait,errors=remount-ro 0 2

In order to make the device mappings available during boot:
~# update-initramfs -u

~# cd /var/www/documents
~# ls
lost+found

******************* HARD REBOOT *******************

root@testvol:~# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/nbd0 46G 6.1G 38G 15% /
none 1010M 4.0K 1010M 1% /dev
tmpfs 203M 7.4M 196M 4% /run
none 4.0K 0 4.0K 0% /sys/fs/cgroup
none 5.0M 0 5.0M 0% /run/lock
none 1012M 0 1012M 0% /run/shm
none 100M 0 100M 0% /run/user

root@testvol:~# lvdisplay
--- Logical volume ---
LV Path /dev/fileserver/Vol1
LV Name Vol1
VG Name fileserver
LV UUID YLIDzh-KyPV-lKjS-A0QC-5eNX-EBzv-e5dEv8
LV Write Access read/write
LV Creation host, time testvol, 2015-08-31 12:32:20 +0000
LV Status NOT available
LV Size 40.00 GiB
Current LE 10240
Segments 1
Allocation inherit
Read ahead sectors auto
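A likely explanation for the "LV Status NOT available" above: the NBD volumes are attached late in the boot sequence, after LVM's normal activation has already run, so the volume group is never activated and the fstab mount fails. One hedged workaround (using the fileserver/Vol1 names from the transcript; a late boot script such as /etc/rc.local is an assumed hook) is to activate the group and retry the mounts by hand:

```shell
# Activate all logical volumes in the "fileserver" volume group.
vgchange -ay fileserver

# Mount everything listed in /etc/fstab that is not yet mounted.
mount -a
```

With `nobootwait` already in the fstab options, the boot won't hang while the device is missing, and the script picks the mount up once the volume group is active.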

#11

Hi @edouard,

I tried to build the new image according to your instructions, but the make image step failed:

ERROR: Access to bucket 'test-images' was denied
ERROR: S3 error: 403 (AccessDenied): Access Denied
ERROR: S3 error: 400 (TooManyBuckets): You have attempted to create more buckets than allowed

Any hints? Or is this related to the scale up of our SIS infrastructure?

TIA

#12

Did you find any solution to that @jaltek?

#13

So, is there a way to do it?

ERROR: S3 error: 404 (Object Not Found)
s3cmd ls s3://test-images || s3cmd mb s3://test-images
ERROR: Access to bucket 'test-images' was denied
ERROR: S3 error: 403 (AccessDenied): Access Denied
ERROR: S3 error: 400 (TooManyBuckets): You have attempted to create more buckets than allowed
../docker-rules.mk:165: recipe for target 'image_on_s3' failed
make: *** [image_on_s3] Error 11
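Reading the log above: the build falls back to creating the bucket because it can't access it, and the creation then fails with 400 (TooManyBuckets), which suggests the account is already at its bucket limit (the 403 can also mean the name is owned by another account). A hedged workaround is to free up a bucket slot with s3cmd before rebuilding; the bucket name below is a placeholder, not one from this thread.

```shell
# List all buckets on the account to find candidates for removal.
s3cmd ls

# Remove an unused bucket (it must be empty; use with care).
s3cmd rb s3://some-unused-bucket
```

After that, re-running the build should let it create the bucket it needs.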

#14

I need this too…

#15

Hello,

We made some changes to the image-builder, and I have just fixed step 7.
You can find the complete image-builder configuration on GitHub: https://github.com/scaleway/image-builder

Edouard

#16

Hi,
I tried following the steps above for a CentOS image. I am trying to increase the root partition to 150 GB, but I get the error below:

FATA[0002] cannot execute 'run': failed to create server: StatusCode: 400, Type: invalid_request_error, APIMessage: The total volume size of VC1S instances must be equal or below 50GB

I intend to create a VC1M server once the image has been created. Any hints?

Regards
Alvin

#17

@alvinfitzgerald you can't extend the storage on Starter Cloud (VC1) servers; it only works with Bare Metal.

#18

I used a Bare Metal C1 but still receive this error message:

FATA[0002] cannot execute 'run': failed to create server: StatusCode: 400, Type: invalid_request_error, APIMessage: The total volume size of VC1S instances must be equal or below 50GB

I tried with multiple images (several Ubuntu versions) but still get the same error. I tried to increase the LSSD to 100GB instead of 50GB.
Please help, thanks.

#19

Wow, one week already and no reply.

#20

@edouard, kindly help. I'm using a C1 server with a 50GB volume mounted to build the image, but I still receive the "The total volume size of VC1S instances must be equal or below 50GB" error.