
OVH Public Cloud Servers?

Discussion in 'Virtual Private Server (VPS) hosting' started by eva2000, Dec 13, 2015.

  1. Matt

    Matt Well-Known Member

    929
    415
    63
    May 25, 2014
    Rotherham, UK
    Ratings:
    +671
    Local Time:
    12:33 AM
    1.5.15
    MariaDB 10.2
    Finally got to test their "high performance" storage myself. Pretty much the same results as you @eva2000
    Code:
    [root@host ~]# mount | grep /dev/vdb1
    /dev/vdb1 on /mnt/db type xfs (rw,relatime,attr2,inode64,noquota)
    [root@host ~]# df -h
    Filesystem      Size  Used Avail Use% Mounted on
    /dev/vda1       197G   33G  157G  18% /
    devtmpfs        3.4G     0  3.4G   0% /dev
    tmpfs           3.4G     0  3.4G   0% /dev/shm
    tmpfs           3.4G  281M  3.1G   9% /run
    tmpfs           3.4G     0  3.4G   0% /sys/fs/cgroup
    tmpfs           682M     0  682M   0% /run/user/1000
    tmpfs           1.0G     0  1.0G   0% /var/mysqltmp
    tmpfs           682M     0  682M   0% /run/user/0
    /dev/vdb1        10G   33M   10G   1% /mnt/db
    [root@host ~]# cd /mnt/db/
    [root@host db]# ls
    [root@host db]# ioping -c 10 .
    4 KiB from . (xfs /dev/vdb1): request=1 time=352 us
    4 KiB from . (xfs /dev/vdb1): request=2 time=868 us
    4 KiB from . (xfs /dev/vdb1): request=3 time=319 us
    4 KiB from . (xfs /dev/vdb1): request=4 time=315 us
    4 KiB from . (xfs /dev/vdb1): request=5 time=297 us
    4 KiB from . (xfs /dev/vdb1): request=6 time=316 us
    4 KiB from . (xfs /dev/vdb1): request=7 time=254 us
    4 KiB from . (xfs /dev/vdb1): request=8 time=312 us
    4 KiB from . (xfs /dev/vdb1): request=9 time=276 us
    4 KiB from . (xfs /dev/vdb1): request=10 time=360 us
    
    --- . (xfs /dev/vdb1) ioping statistics ---
    10 requests completed in 9.01 s, 2.73 k iops, 10.6 MiB/s
    min/avg/max/mdev = 254 us / 366 us / 868 us / 169 us
    [root@host db]# ioping -RD . 
    
    --- . (xfs /dev/vdb1) ioping statistics ---
    3.31 k requests completed in 3.00 s, 1.11 k iops, 4.36 MiB/s
    min/avg/max/mdev = 78 us / 896 us / 116.0 ms / 2.90 ms
    [root@host db]# ioping -RL .
    
    --- . (xfs /dev/vdb1) ioping statistics ---
    1.48 k requests completed in 3.00 s, 511 iops, 127.9 MiB/s
    min/avg/max/mdev = 1.46 ms / 1.95 ms / 11.3 ms / 408 us
    [root@host db]# ioping -RC .
    
    --- . (xfs /dev/vdb1) ioping statistics ---
    2.88 M requests completed in 3.00 s, 1.14 M iops, 4.35 GiB/s
    min/avg/max/mdev = 0 us / 0 us / 37.8 ms / 23 us
    [root@host db]# dd if=/dev/zero of=sb-io-test bs=1M count=1k conv=fdatasync
    1024+0 records in
    1024+0 records out
    1073741824 bytes (1.1 GB) copied, 1.94348 s, 552 MB/s
    [root@host db]# dd if=/dev/zero of=sb-io-test bs=64k count=16k conv=fdatasync
    16384+0 records in
    16384+0 records out
    1073741824 bytes (1.1 GB) copied, 1.75944 s, 610 MB/s
    [root@host db]# dd if=/dev/zero of=sb-io-test bs=1M count=1k oflag=dsync
    1024+0 records in
    1024+0 records out
    1073741824 bytes (1.1 GB) copied, 15.1472 s, 70.9 MB/s
    [root@host db]# dd if=/dev/zero of=sb-io-test bs=64k count=16k oflag=dsync
    16384+0 records in
    16384+0 records out
    1073741824 bytes (1.1 GB) copied, 93.8036 s, 11.4 MB/s
    [root@host db]# 
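For anyone reading along, the huge gap between the fdatasync and dsync numbers above is expected from how the two dd flags behave. A minimal sketch (assumes GNU coreutils dd; the file name dd-io-test is just an example):

```shell
# conv=fdatasync writes all blocks first, then calls fdatasync() once at
# the end, so the elapsed time mostly reflects sustained throughput.
dd if=/dev/zero of=dd-io-test bs=1M count=16 conv=fdatasync

# oflag=dsync opens the file with O_DSYNC, forcing a flush after every
# single write() -- one flush per block. With bs=64k that is hundreds of
# flushes per GiB, which is why the same workload collapses to ~11 MB/s
# on storage where each flush is expensive.
dd if=/dev/zero of=dd-io-test bs=64k count=256 oflag=dsync

rm -f dd-io-test
```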


     
  2. Matt

    Matt Well-Known Member

Read on a site that the nobarrier mount option can noticeably improve sync write performance when the underlying storage is otherwise protected. So:
    Code:
    mount -o nobarrier /dev/vdb1 /mnt/db
    Code:
    [root@host /]# mount | grep vdb1
    /dev/vdb1 on /mnt/db type xfs (rw,relatime,attr2,nobarrier,inode64,noquota)
    [root@host /]#
    Code:
    [root@host db]# dd if=/dev/zero of=sb-io-test bs=64k count=16k oflag=dsync
    16384+0 records in
    16384+0 records out
    1073741824 bytes (1.1 GB) copied, 10.6481 s, 101 MB/s
    [root@host db]# dd if=/dev/zero of=sb-io-test bs=1M count=1k conv=fdatasync
    1024+0 records in
    1024+0 records out
    1073741824 bytes (1.1 GB) copied, 1.68721 s, 636 MB/s
    [root@host db]# dd if=/dev/zero of=sb-io-test bs=64k count=16k conv=fdatasync
    16384+0 records in
    16384+0 records out
    1073741824 bytes (1.1 GB) copied, 1.82256 s, 589 MB/s
    [root@host db]# dd if=/dev/zero of=sb-io-test bs=1M count=1k oflag=dsync
    1024+0 records in
    1024+0 records out
    1073741824 bytes (1.1 GB) copied, 3.03999 s, 353 MB/s
    [root@host db]# dd if=/dev/zero of=sb-io-test bs=64k count=16k oflag=dsync
    16384+0 records in
    16384+0 records out
    1073741824 bytes (1.1 GB) copied, 10.6453 s, 101 MB/s
    [root@host db]# 
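One caveat: a manual mount -o nobarrier won't survive a reboot, so to make it stick it would also need an /etc/fstab entry. A sketch, assuming the same /dev/vdb1 on /mnt/db layout shown above:

```
# hypothetical /etc/fstab line matching the mount output above
/dev/vdb1  /mnt/db  xfs  defaults,nobarrier,inode64  0  0
```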
     
  3. eva2000

    eva2000 Administrator Staff Member

    54,394
    12,198
    113
    May 24, 2014
    Brisbane, Australia
    Ratings:
    +18,763
    Local Time:
    10:33 AM
    Nginx 1.27.x
    MariaDB 10.x/11.4+
    sweet :D improved dd dsync tests it seems
     
  4. Matt

    Matt Well-Known Member

    "should" be OK with the nobarrier option enabled, given the triple write setup they have for their storage.
     
  5. eva2000

    eva2000 Administrator Staff Member

really depends on what they mean by triple replication HA; would filesystem corruption also be replicated across all 3 copies?
     
  6. Matt

    Matt Well-Known Member

    Good question. I've set it up with nobarrier, and moved MySQL over to it (purely on my own server running my site). From what I can find, it would only cause corruption on a single drive failure.

Even Percona sets nobarrier in their guide to XFS:
    Setting up XFS on Hardware RAID - the simple edition - MySQL Performance Blog

On a side note, would it be possible to have an option set in centmin.sh for the MySQL socket location? Currently it's hard-coded.
     
  7. eva2000

    eva2000 Administrator Staff Member

well, they're using hardware RAID with BBU enabled, so you're protected from the sort of events that would cause corruption; that makes sense

yes, my.cnf specifically sets socket=/var/lib/mysql/mysql.sock to that location; it shouldn't be that hard to change it on your end manually or with a sed replacement?

If you're using Centmin Mod 123.09beta01, it has new include file support via inc/z_custom.inc, which is open ended (centminmod/centmin.sh at 123.09beta01 · centminmod/centminmod · GitHub) and not tracked by git. Experienced folks can script their own bash shell functions in a manually created file at inc/z_custom.inc and probably invoke those custom functions, although it's not tied to any existing function call.
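To make that concrete, a minimal sketch of what a hand-created inc/z_custom.inc might look like. The function name my_db_mount_check is made up for illustration, not an existing centmin.sh hook:

```shell
# inc/z_custom.inc -- user-created, not tracked by git.
# Hypothetical helper: warn if the dedicated DB volume is not mounted
# before doing any MySQL datadir work (checks /proc/mounts).
my_db_mount_check() {
  if ! grep -qs ' /mnt/db ' /proc/mounts; then
    echo "WARNING: /mnt/db is not mounted"
    return 1
  fi
  echo "/mnt/db mounted OK"
}
```

You could then call my_db_mount_check from your own scripts before moving data around.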

    Might want to add to discussion in beta thread at Beta Branch - Centmin Mod .09 beta branch Testing | Centmin Mod Community :)
     
  8. Matt

    Matt Well-Known Member

I changed the my.cnf config to use the new socket location I set on the new drive, but websites were failing to load with "an unexpected error occurred", and the mysql command line didn't work either:

    Code:
    [root@host databases]# mysql -u root
    ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/lib/mysql/mysql.sock' (2 "No such file or directory")
    [root@host databases]#
     
  9. eva2000

    eva2000 Administrator Staff Member

okay, in your my.cnf add a [client] section and specifically set the custom socket option there too

    Code:
    mysqladmin ver
    mysqladmin  Ver 9.1 Distrib 10.0.23-MariaDB, for Linux on x86_64
    Copyright (c) 2000, 2015, Oracle, MariaDB Corporation Ab and others.
    
    Server version          10.0.23-MariaDB
    Protocol version        10
    Connection              Localhost via UNIX socket
    UNIX socket             /home/mysql/mysql.sock
    Uptime:                 55 sec
    
    Threads: 1  Questions: 1  Slow queries: 0  Opens: 0  Flush tables: 1  Open tables: 63  Queries per second avg: 0.018
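For reference, a minimal sketch of the relevant my.cnf pieces; the /home/mysql/mysql.sock path here just mirrors the mysqladmin output above and is only an example:

```
[mysqld]
socket=/home/mysql/mysql.sock

[client]
# mysql, mysqladmin etc. read this section; without it they still look
# for the default /var/lib/mysql/mysql.sock
socket=/home/mysql/mysql.sock
```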
     
  10. Matt

    Matt Well-Known Member

Yeah, literally just figured out [client] wasn't set anywhere :oops:
     
  12. Matt

    Matt Well-Known Member

    Code:
    sed -i 's|/var/lib/mysql/|/mnt/db/mysql/|' php_configure.inc
and rebuilding PHP has fixed it.
     
  13. eva2000

    eva2000 Administrator Staff Member

ah, totally forgot about the PHP configured path heh

will see about making a variable for this in 123.09beta01
     
  14. eva2000

    eva2000 Administrator Staff Member

So @Matt, how's your live production experience with OVH Public Cloud been? Are you running any local backup processes on the servers? How have the limited IOPS been for that?

    OVH NEWS | THE LATEST ON IT INNOVATIONS AND TRENDS - OVH

     
    Last edited: Jan 22, 2016
  15. Matt

    Matt Well-Known Member

Takes 38 minutes for the cPanel backup to complete and send the files off-server when it's done.
     
  16. eva2000

    eva2000 Administrator Staff Member

that's for how much backed-up data?
     
  17. eva2000

    eva2000 Administrator Staff Member

Got this email: EG instances will be renamed HG, and a new E5-2650v3 EG VPS offering will be available.
     
  18. Matt

    Matt Well-Known Member

The IO on my Cloud VPS has been in the toilet for the last 2 weeks. I've got a ticket open with them currently.
     
  19. eva2000

    eva2000 Administrator Staff Member

    ouch, let us know how you go :)
     
  20. Matt

    Matt Well-Known Member

They never fixed it, so the cloud VPS is gone. Moved over to RamNode now.