
Linode Block Storage - Early Access

Discussion in 'Virtual Private Server (VPS) hosting' started by eva2000, Jun 15, 2017.

  1. eva2000 (Administrator, Staff Member)
    Linode.com is finally chasing Vultr.com's and DigitalOcean's block storage offerings: https://forum.linode.com/viewtopic.php?f=26&t=14906 :)

    It will be free during beta testing and is only available for the beta period at the Newark, New Jersey datacenter. Triple replication, like OVH's Ceph, is somewhat worrying, as OVH's Ceph was very slow for write speeds, so it will be interesting to see how Linode's block storage fares performance-wise :)


     
  2. eva2000 (Administrator, Staff Member)
    Just did a quick test on a $5 Linode VPS with 20GB of block storage mounted at /home/nginx, and block storage is definitely slower, roughly 1/5th the speed of local SSD.
    Code (Text):
    ---------------------------------------------------------------------------
    Total Curl Installer YUM or DNF Time: 111.7093 seconds
    Total YUM Time: 12.132566469 seconds
    Total YUM or DNF + Source Download Time: 40.5412
    Total Nginx First Time Install Time: 409.5826
    Total PHP First Time Install Time: 719.5636
    Download Zip From Github Time: 3.7432
    Total Time Other eg. source compiles: 391.3836
    Total Centmin Mod Install Time: 1561.0710
    ---------------------------------------------------------------------------
    Total Install Time (curl yum + cm install + zip download): 1676.5235 seconds
    ---------------------------------------------------------------------------
    

    Code (Text):
    df -hT
    Filesystem     Type      Size  Used Avail Use% Mounted on
    /dev/root      ext4       20G  5.3G   13G  29% /
    devtmpfs       devtmpfs  493M  4.0K  493M   1% /dev
    tmpfs          tmpfs     495M     0  495M   0% /dev/shm
    tmpfs          tmpfs     495M  7.7M  488M   2% /run
    tmpfs          tmpfs     495M     0  495M   0% /sys/fs/cgroup
    tmpfs          tmpfs      99M     0   99M   0% /run/user/0
    /dev/sdc       ext4       20G   45M   19G   1% /home/nginx
    /dev/loop0     ext4      976M  2.6M  907M   1% /tmp
    

    Code (Text):
    ls -lah /home/nginx/domains/
    total 12K
    drwxr-s--- 3 nginx nginx 4.0K Jul 23 00:09 .
    drwxr-sr-x 4 nginx nginx 4.0K Jul 23 00:09 ..
    drwxr-x--- 6 nginx nginx 4.0K Jul 23 00:09 demodomain.com
    


    The VPS runs an Intel Xeon E5-2697 v4 @ 2.30GHz Broadwell-EP based CPU.
    Code (Text):
    cat /proc/cpuinfo
    processor       : 0
    vendor_id       : GenuineIntel
    cpu family      : 6
    model           : 79
    model name      : Intel(R) Xeon(R) CPU E5-2697 v4 @ 2.30GHz
    stepping        : 1
    microcode       : 0x1
    cpu MHz         : 2299.992
    cache size      : 16384 KB
    physical id     : 0
    siblings        : 1
    core id         : 0
    cpu cores       : 1
    apicid          : 0
    initial apicid  : 0
    fpu             : yes
    fpu_exception   : yes
    cpuid level     : 13
    wp              : yes
    flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon rep_good nopl xtopology eagerfpu pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm rdseed adx smap xsaveopt arat
    bugs            :
    bogomips        : 4601.65
    clflush size    : 64
    cache_alignment : 64
    address sizes   : 40 bits physical, 48 bits virtual
    power management:
    


    dd Benchmarks

    test                                                                           bandwidth (MB/s)
    dd if=/dev/zero of=sb-io-test bs=128k count=1k conv=fdatasync - local ssd      536.0
    dd if=/dev/zero of=sb-io-test bs=8k count=16k conv=fdatasync - local ssd       521.0
    dd if=/dev/zero of=sb-io-test bs=128k count=1k oflag=dsync - local ssd         106.0
    dd if=/dev/zero of=sb-io-test bs=8k count=16k oflag=dsync - local ssd          13.6
    dd if=/dev/zero of=sb-io-test bs=128k count=1k conv=fdatasync - block storage  392.0
    dd if=/dev/zero of=sb-io-test bs=8k count=16k conv=fdatasync - block storage   565.0
    dd if=/dev/zero of=sb-io-test bs=128k count=1k oflag=dsync - block storage     24.8
    dd if=/dev/zero of=sb-io-test bs=8k count=16k oflag=dsync - block storage      1.6
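
    For reference, the dd commands behind these numbers (shown in full in the raw output further down) can be re-run as a minimal script like this - run it from the filesystem under test and clean up the test file afterwards:
    Code (Text):
    # buffered writes, flushed once at the end (conv=fdatasync):
    # closer to real-world sequential write throughput
    dd if=/dev/zero of=sb-io-test bs=128k count=1k conv=fdatasync
    dd if=/dev/zero of=sb-io-test bs=8k count=16k conv=fdatasync
    # fully synchronous writes (oflag=dsync): every block is flushed
    # to disk, exposing per-write latency - hence the much lower MB/s
    dd if=/dev/zero of=sb-io-test bs=128k count=1k oflag=dsync
    dd if=/dev/zero of=sb-io-test bs=8k count=16k oflag=dsync
    # remove the 134MB test file
    rm -f sb-io-test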


    FIO Benchmarks

    test                          bandwidth (KB/s)  IOPS
    fio read - local ssd          272635            68158
    fio writes - local ssd        244882            61220
    fio read - block storage      57908             14476
    fio writes - block storage    45677             11419
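
    The reads.ini and writes.ini job files themselves aren't posted, but a plausible reconstruction from the parameters visible in the fio output below (libaio, iodepth=64, 4k random I/O, 1024MB file) would look like this; direct=1 and the filename are assumptions, not confirmed settings:
    Code (Text):
    ; reads.ini - reconstructed sketch, not the original job file
    [randomreads]
    ioengine=libaio
    iodepth=64
    rw=randread
    bs=4k
    size=1024m
    direct=1          ; assumption: bypass page cache
    filename=sb-io-test   ; assumption: test file name
    ; writes.ini would be identical apart from rw=randwrite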


    ioping Benchmarks

    test                                      bandwidth (MiB/s)  IOPS
    ioping default - local ssd                12.6               3230
    ioping seek - local ssd                   46.0               11800
    ioping sequential - local ssd             848.6              3390
    ioping sequential cached - local ssd      4170               17100
    ioping default - block storage            10.1               2570
    ioping seek - block storage               6.96               1780
    ioping sequential - block storage         152.2              608
    ioping sequential cached - block storage  4030               16500
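
    For context, the four ioping modes in this table map to the invocations shown in the raw output below (flag meanings per the ioping version used here; newer releases have changed some of these options):
    Code (Text):
    ioping -c 5 /path    # default: 4 KiB random read latency, 5 requests
    ioping -R /path      # -R: seek rate test (small random reads, as fast as possible)
    ioping -RL /path     # -L: sequential requests with a larger request size
    ioping -RLC /path    # -C: allow cached I/O, so the page cache serves most reads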


    Linode Local SSD Benchmarks



    FIO reads
    Code (Text):
    ./fio reads.ini
    randomreads: (g=0): rw=randread, bs=4K-4K/4K-4K, ioengine=libaio, iodepth=64
    fio-2.0.9
    Starting 1 process
    randomreads: Laying out IO file(s) (1 file(s) / 1024MB)
    Jobs: 1 (f=1): [r] [100.0% done] [274.9M/0K /s] [70.4K/0  iops] [eta 00m:00s]
    randomreads: (groupid=0, jobs=1): err= 0: pid=27590: Sun Jul 23 01:30:25 2017
      read : io=1024.3MB, bw=272635KB/s, iops=68158 , runt=  3847msec
      cpu          : usr=6.14%, sys=71.22%, ctx=12252, majf=0, minf=70
      IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
         submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
         complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
         issued    : total=r=262207/w=0/d=0, short=r=0/w=0/d=0
    
    Run status group 0 (all jobs):
       READ: io=1024.3MB, aggrb=272635KB/s, minb=272635KB/s, maxb=272635KB/s, mint=3847msec, maxt=3847msec
    
    Disk stats (read/write):
      sda: ios=262014/0, merge=0/0, ticks=74153/0, in_queue=74060, util=85.58%
    


    FIO writes
    Code (Text):
    ./fio writes.ini
    randomwrites: (g=0): rw=randwrite, bs=4K-4K/4K-4K, ioengine=libaio, iodepth=64
    fio-2.0.9
    Starting 1 process
    Jobs: 1 (f=1): [w] [100.0% done] [0K/267.6M /s] [0 /68.6K iops] [eta 00m:00s]
    randomwrites: (groupid=0, jobs=1): err= 0: pid=27595: Sun Jul 23 01:30:44 2017
      write: io=1024.3MB, bw=244882KB/s, iops=61220 , runt=  4283msec
      cpu          : usr=4.97%, sys=77.67%, ctx=17501, majf=0, minf=5
      IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
         submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
         complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
         issued    : total=r=0/w=262207/d=0, short=r=0/w=0/d=0
    
    Run status group 0 (all jobs):
      WRITE: io=1024.3MB, aggrb=244881KB/s, minb=244881KB/s, maxb=244881KB/s, mint=4283msec, maxt=4283msec
    
    Disk stats (read/write):
      sda: ios=0/252978, merge=0/2, ticks=0/57120, in_queue=57167, util=88.29%
    


    Code (Text):
    ioping -c 5 /
    4 KiB <<< / (ext4 /dev/root): request=1 time=104.5 us (warmup)
    4 KiB <<< / (ext4 /dev/root): request=2 time=282.9 us
    4 KiB <<< / (ext4 /dev/root): request=3 time=404.5 us
    4 KiB <<< / (ext4 /dev/root): request=4 time=261.2 us
    4 KiB <<< / (ext4 /dev/root): request=5 time=290.3 us
    
    --- / (ext4 /dev/root) ioping statistics ---
    4 requests completed in 1.24 ms, 16 KiB read, 3.23 k iops, 12.6 MiB/s
    generated 5 requests in 4.00 s, 20 KiB, 1 iops, 5.00 KiB/s
    min/avg/max/mdev = 261.2 us / 309.7 us / 404.5 us / 55.8 us
    

    Code (Text):
    ioping -R /
    
    --- / (ext4 /dev/root) ioping statistics ---
    33.5 k requests completed in 2.84 s, 130.9 MiB read, 11.8 k iops, 46.0 MiB/s
    generated 33.5 k requests in 3.00 s, 130.9 MiB, 11.2 k iops, 43.6 MiB/s
    min/avg/max/mdev = 58.6 us / 84.8 us / 3.39 ms / 49.6 us
    

    Code (Text):
    ioping -RL /
    
    --- / (ext4 /dev/root) ioping statistics ---
    9.28 k requests completed in 2.73 s, 2.27 GiB read, 3.39 k iops, 848.6 MiB/s
    generated 9.28 k requests in 3.00 s, 2.27 GiB, 3.09 k iops, 773.3 MiB/s
    min/avg/max/mdev = 216.7 us / 294.6 us / 3.71 ms / 76.9 us
    

    Code (Text):
    ioping -RLC /
    
    --- / (ext4 /dev/root) ioping statistics ---
    51.2 k requests completed in 2.99 s, 12.5 GiB read, 17.1 k iops, 4.17 GiB/s
    generated 51.2 k requests in 3.00 s, 12.5 GiB, 17.1 k iops, 4.17 GiB/s
    min/avg/max/mdev = 46.7 us / 58.5 us / 336.6 us / 5.47 us
    


    Code (Text):
    dd if=/dev/zero of=sb-io-test bs=128k count=1k conv=fdatasync
    1024+0 records in
    1024+0 records out
    134217728 bytes (134 MB) copied, 0.250513 s, 536 MB/s
    

    Code (Text):
    dd if=/dev/zero of=sb-io-test bs=8k count=16k conv=fdatasync
    16384+0 records in
    16384+0 records out
    134217728 bytes (134 MB) copied, 0.257688 s, 521 MB/s
     

    Code (Text):
    dd if=/dev/zero of=sb-io-test bs=128k count=1k oflag=dsync
    1024+0 records in
    1024+0 records out
    134217728 bytes (134 MB) copied, 1.26446 s, 106 MB/s
    

    Code (Text):
    dd if=/dev/zero of=sb-io-test bs=8k count=16k oflag=dsync
    16384+0 records in
    16384+0 records out
    134217728 bytes (134 MB) copied, 9.86406 s, 13.6 MB/s
    


    Linode Block Storage Benchmarks



    FIO reads
    Code (Text):
    ./fio reads.ini
    randomreads: (g=0): rw=randread, bs=4K-4K/4K-4K, ioengine=libaio, iodepth=64
    fio-2.0.9
    Starting 1 process
    randomreads: Laying out IO file(s) (1 file(s) / 1024MB)
    Jobs: 1 (f=1): [r] [100.0% done] [54265K/0K /s] [13.6K/0  iops] [eta 00m:00s]
    randomreads: (groupid=0, jobs=1): err= 0: pid=27618: Sun Jul 23 01:32:02 2017
      read : io=1024.3MB, bw=57908KB/s, iops=14476 , runt= 18112msec
      cpu          : usr=5.46%, sys=33.71%, ctx=102866, majf=0, minf=70
      IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
         submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
         complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
         issued    : total=r=262207/w=0/d=0, short=r=0/w=0/d=0
    
    Run status group 0 (all jobs):
       READ: io=1024.3MB, aggrb=57907KB/s, minb=57907KB/s, maxb=57907KB/s, mint=18112msec, maxt=18112msec
    
    Disk stats (read/write):
      sdc: ios=261316/2, merge=0/1, ticks=1121087/20, in_queue=1121543, util=99.34%
    


    FIO writes
    Code (Text):
    ./fio writes.ini       
    randomwrites: (g=0): rw=randwrite, bs=4K-4K/4K-4K, ioengine=libaio, iodepth=64
    fio-2.0.9
    Starting 1 process
    Jobs: 1 (f=1): [w] [100.0% done] [0K/45594K /s] [0 /11.4K iops] [eta 00m:00s]
    randomwrites: (groupid=0, jobs=1): err= 0: pid=27622: Sun Jul 23 01:32:44 2017
      write: io=1024.3MB, bw=45677KB/s, iops=11419 , runt= 22962msec
      cpu          : usr=3.42%, sys=19.32%, ctx=69312, majf=0, minf=5
      IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
         submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
         complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
         issued    : total=r=0/w=262207/d=0, short=r=0/w=0/d=0
    
    Run status group 0 (all jobs):
      WRITE: io=1024.3MB, aggrb=45676KB/s, minb=45676KB/s, maxb=45676KB/s, mint=22962msec, maxt=22962msec
    
    Disk stats (read/write):
      sdc: ios=0/261895, merge=0/5, ticks=0/1409780, in_queue=1409714, util=99.04%
    


    Code (Text):
    ioping -c 5 /home/nginx
    4 KiB <<< /home/nginx (ext4 /dev/sdc): request=1 time=224.7 us (warmup)
    4 KiB <<< /home/nginx (ext4 /dev/sdc): request=2 time=398.3 us
    4 KiB <<< /home/nginx (ext4 /dev/sdc): request=3 time=379.6 us
    4 KiB <<< /home/nginx (ext4 /dev/sdc): request=4 time=411.3 us
    4 KiB <<< /home/nginx (ext4 /dev/sdc): request=5 time=364.2 us
    
    --- /home/nginx (ext4 /dev/sdc) ioping statistics ---
    4 requests completed in 1.55 ms, 16 KiB read, 2.57 k iops, 10.1 MiB/s
    generated 5 requests in 4.00 s, 20 KiB, 1 iops, 5.00 KiB/s
    min/avg/max/mdev = 364.2 us / 388.4 us / 411.3 us / 17.9 us
    

    Code (Text):
    ioping -R /home/nginx
    
    --- /home/nginx (ext4 /dev/sdc) ioping statistics ---
    5.27 k requests completed in 2.96 s, 20.6 MiB read, 1.78 k iops, 6.96 MiB/s
    generated 5.27 k requests in 3.00 s, 20.6 MiB, 1.76 k iops, 6.86 MiB/s
    min/avg/max/mdev = 79.3 us / 561.0 us / 9.59 ms / 560.2 us
    

    Code (Text):
    ioping -RL /home/nginx
    
    --- /home/nginx (ext4 /dev/sdc) ioping statistics ---
    1.79 k requests completed in 2.93 s, 446.8 MiB read, 608 iops, 152.2 MiB/s
    generated 1.79 k requests in 3.00 s, 447 MiB, 595 iops, 148.9 MiB/s
    min/avg/max/mdev = 312.8 us / 1.64 ms / 23.3 ms / 1.02 ms
    

    Code (Text):
    ioping -RLC /home/nginx
    
    --- /home/nginx (ext4 /dev/sdc) ioping statistics ---
    49.4 k requests completed in 2.99 s, 12.1 GiB read, 16.5 k iops, 4.03 GiB/s
    generated 49.4 k requests in 3.00 s, 12.1 GiB, 16.5 k iops, 4.02 GiB/s
    min/avg/max/mdev = 47.1 us / 60.6 us / 415.6 us / 7.91 us
    


    Code (Text):
    dd if=/dev/zero of=sb-io-test bs=128k count=1k conv=fdatasync
    1024+0 records in
    1024+0 records out
    134217728 bytes (134 MB) copied, 0.342264 s, 392 MB/s
    

    Code (Text):
    dd if=/dev/zero of=sb-io-test bs=8k count=16k conv=fdatasync
    16384+0 records in
    16384+0 records out
    134217728 bytes (134 MB) copied, 0.237358 s, 565 MB/s
     

    Code (Text):
    dd if=/dev/zero of=sb-io-test bs=128k count=1k oflag=dsync
    1024+0 records in
    1024+0 records out
    134217728 bytes (134 MB) copied, 5.42164 s, 24.8 MB/s
    

    Code (Text):
    dd if=/dev/zero of=sb-io-test bs=8k count=16k oflag=dsync
    16384+0 records in
    16384+0 records out
    134217728 bytes (134 MB) copied, 82.1537 s, 1.6 MB/s
     
    Last edited: Jul 23, 2017
  3. eva2000 (Administrator, Staff Member)
    Fairly easy to add more block storage. Added a 2nd 20GB block storage volume mounted at /backup. Literally just a few seconds between adding the block storage volume in the Linode control panel and having it attached to the VPS! (A sketch of the usual format-and-mount steps follows the df output below.)
    Code (Text):
    df -hT
    Filesystem     Type      Size  Used Avail Use% Mounted on
    /dev/root      ext4       20G  5.3G   13G  29% /
    devtmpfs       devtmpfs  493M     0  493M   0% /dev
    tmpfs          tmpfs     495M     0  495M   0% /dev/shm
    tmpfs          tmpfs     495M   14M  481M   3% /run
    tmpfs          tmpfs     495M     0  495M   0% /sys/fs/cgroup
    /dev/loop0     ext4      976M  2.6M  907M   1% /tmp
    /dev/sdc       ext4       20G  181M   19G   1% /home/nginx
    tmpfs          tmpfs      99M     0   99M   0% /run/user/0
    /dev/sdd       ext4       20G   45M   19G   1% /backup
    
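
    For anyone following along, the steps between attaching the volume in the Linode Manager and seeing it in df above are roughly the following - a minimal sketch assuming the new volume appears as /dev/sdd as in this output (the Linode dashboard shows the exact device path for your volume, and a stable /dev/disk/by-id path is the safer choice for fstab):
    Code (Text):
    # format the new volume (assumes /dev/sdd - double check, this destroys existing data!)
    mkfs.ext4 /dev/sdd
    # create the mount point and mount it
    mkdir -p /backup
    mount /dev/sdd /backup
    # persist the mount across reboots
    echo "/dev/sdd /backup ext4 defaults 0 2" >> /etc/fstab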


    [attached screenshot: upload_2017-7-25_6-51-38.png]
     
  4. eva2000 (Administrator, Staff Member)
    Linode Fremont, California is the next Linode block storage target, ~4-5 weeks out: Linode Forum :: Linode Block Storage (beta)

     
  5. eva2000 (Administrator, Staff Member)
    Doesn't inspire confidence, but the test Linode VPS with block storage just got an emergency email about a RAID failure requiring VPS data migration. The good thing is that Linode has so far had a flawless track record with such migrations - one of the benefits of Linode's cloud-based VPS setup.
    [attached screenshots: upload_2017-8-6_8-16-31.png, upload_2017-8-6_8-17-10.png, upload_2017-8-6_8-19-27.png]

    Migration complete - the automated Linode VPS migration to a new Linode KVM host node took ~3 minutes 40 seconds :)

    [attached screenshot: upload_2017-8-6_8-21-47.png]
     
    Last edited: Aug 6, 2017
  6. Matt (Well-Known Member)
    Just started playing with this myself. Added an initial 20GB volume to one of my VPSes to use for database backup storage prior to shipping it off to my backup machine.
     
  7. eva2000 (Administrator, Staff Member)
    Guess that's one use case that would work fine even if the disk I/O performance is much lower on block storage. Though if your remote backup target can do 100+ MB/s, then the block storage would be limiting transfer speeds.
     
  8. buik
    Exactly my opinion.
    Is this block storage already available in the UK?
     
  9. Matt (Well-Known Member)
    No, this is on a VPS in Newark.
     
  10. eva2000 (Administrator, Staff Member)
    Linode now also has the Linode Block Storage (non-SSD) beta in Fremont, CA for the US West Coast. Linode just notified me that this forum's Fremont, CA VPS needs a host node migration for future new features and performance, so I suspect it's related to Linode Block Storage :)

    So now I'm planning when to manually enter the migration queue :)

    [attached screenshot: linode-migration-queue-nov13-2017.png]

    Forums migrated:

    [attached screenshot: upload_2017-11-6_17-54-54.png]
     
    Last edited: Nov 6, 2017