
Test Server

Discussion in 'System Administration' started by Matt, Dec 4, 2018.

  1. Matt (Moderator, Staff Member)
    Had a spare ProLiant DL380 G6 lying around at work, so I filled all 16 drive bays with some drives I also had spare and put it into the test rack to have a play about with.

    [Attached image: 20181203_155157.jpg]

    One of the drives was DOA, so it got swapped out and the array rebuilt.
    [Attached image: upload_2018-12-4_13-48-43.png]
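
    For reference, the rebuild can be watched from the OS with something along these lines (assuming the Smart Array ssacli tool is available - older G6-era boxes ship hpacucli with the same syntax - and the slot number here is only an example):

    Code:
    # controller health summary
    ssacli ctrl all show status
    # logical drive state; reports something like "Recovering" while the array rebuilds
    ssacli ctrl slot=0 ld all show status
    # physical drives; the replacement disk shows as "Rebuilding"
    ssacli ctrl slot=0 pd all show status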

    Only a low-end CPU:

    Code:
    model name      : Intel(R) Xeon(R) CPU           E5504  @ 2.00GHz
    but it's decent enough for testing with.

    Currently running CentOS 7.6

    Code:
    # cat /etc/redhat-release
    CentOS Linux release 7.6.1810 (Core)
    
     
  2. eva2000 (Administrator, Staff Member)
    Nice.. what are you going to be testing? Lenovo mouse!
     
  3. Matt (Moderator, Staff Member)
    Just put another 3 of them in the rack, so we're going to play about with OpenStack.
     
  4. eva2000 (Administrator, Staff Member)
    Nice 4x server setup for OpenStack! Keep us updated on your adventures :D
     
  5. Matt (Moderator, Staff Member)
    The undercloud node also has a failed drive, so it's currently rebuilding the array.

    [Attached image: upload_2018-12-6_16-1-43.png]

    Have added a switch into the mix as well to connect them all over their additional NICs.
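
    Since there's an undercloud node in the mix, this sounds like a TripleO/RDO style deployment. For anyone unfamiliar, the undercloud bring-up on CentOS 7 looks very roughly like this (a sketch only - it assumes the RDO/TripleO repos are already enabled and a 'stack' deployment user exists, and paths/options vary by release):

    Code:
    # run as the 'stack' user on the undercloud node
    sudo yum install -y python-tripleoclient
    # copy the sample config and set local_ip plus the DHCP/introspection ranges
    # for the provisioning NIC hanging off the extra switch
    cp /usr/share/instack-undercloud/undercloud.conf.sample ~/undercloud.conf
    openstack undercloud install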
     
  6. Matt (Moderator, Staff Member)
    All up and running now. I've set up some HA testing on there with 6 VMs running a Galera cluster, HAProxy, Gluster and Nginx.
    [Attached images: 20190227_144443.jpg, 20190227_144427.jpg, 20190214_103843.jpg]
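
    To give an idea of what the HAProxy piece of a stack like that can look like, here is a purely illustrative haproxy.cfg fragment fronting the three web nodes and a Galera cluster (the host names reuse the web-1/2/3 naming, but the IPs, ports and db-* names are made up, and a global/defaults section is assumed elsewhere):

    Code:
    frontend www
        mode http
        bind *:80
        default_backend nginx_pool

    backend nginx_pool
        mode http
        balance roundrobin
        option httpchk HEAD /
        server web-1 10.0.0.11:80 check
        server web-2 10.0.0.12:80 check
        server web-3 10.0.0.13:80 check

    # Galera is usually proxied in TCP mode with a single active writer
    listen galera
        mode tcp
        bind *:3306
        balance leastconn
        server db-1 10.0.0.21:3306 check
        server db-2 10.0.0.22:3306 check backup
        server db-3 10.0.0.23:3306 check backup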
     
  7. eva2000 (Administrator, Staff Member)
    sweet... ram yummy :D

    Would be interesting to see the disk I/O performance for your gluster nodes :)
     
  8. Matt (Moderator, Staff Member)
    It's average at best, TBH. We're running NFS storage for the servers (an 8TB RAID 10 array, but it's on 7k drives). Using Gluster 4.1, which does seem faster than the last time I tried it on 3.7.
     
  9. Matt (Moderator, Staff Member)
    Code:
    [[email protected] html]# gluster volume status
    Status of volume: gvol0
    Gluster process                             TCP Port  RDMA Port  Online  Pid
    ------------------------------------------------------------------------------
    Brick web-1:/mnt/gvol0/brick0/brick         49152     0          Y       9957
    Brick web-2:/mnt/gvol0/brick0/brick         49152     0          Y       9934
    Brick web-3:/mnt/gvol0/brick0/brick         49152     0          Y       9927
    Self-heal Daemon on localhost               N/A       N/A        Y       9980
    Self-heal Daemon on web-3                   N/A       N/A        Y       9950
    Self-heal Daemon on web-2                   N/A       N/A        Y       9957

    Task Status of Volume gvol0
    ------------------------------------------------------------------------------
    There are no active volume tasks
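
    For context, a replica-3 volume with bricks laid out like the ones above would normally be created along these lines (standard GlusterFS CLI; the client mount point is just an example):

    Code:
    # from web-1: add the other two nodes to the trusted pool
    gluster peer probe web-2
    gluster peer probe web-3
    # create and start a replica-3 volume on the same brick paths as above
    gluster volume create gvol0 replica 3 \
        web-1:/mnt/gvol0/brick0/brick \
        web-2:/mnt/gvol0/brick0/brick \
        web-3:/mnt/gvol0/brick0/brick
    gluster volume start gvol0
    # mount on a client with the FUSE client
    mount -t glusterfs web-1:/gvol0 /var/www/html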
     
  10. eva2000 (Administrator, Staff Member)
    3-brick setup, so disk I/O shouldn't be that bad relatively speaking for the type of drives used. I thought it was a 5+ brick gluster setup - last time I did that, disk I/O was pretty slow on a 5x VPS system, which is expected as you add more bricks/servers to the gluster config.
     
  11. Matt (Moderator, Staff Member)
    I'll do some testing on Monday when I'm back in the office to see what it's really like. It was pretty slow extracting the WP archive into the file system.
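
    One way to put numbers on it would be something like this, run from inside the gluster mount (the mount path, sizes and fio parameters are just examples):

    Code:
    cd /var/www/html   # or wherever the gluster volume is mounted
    # sequential write throughput, bypassing the page cache
    dd if=/dev/zero of=ddtest bs=1M count=1024 oflag=direct
    # small-file random writes - closer to what extracting a WP archive does
    fio --name=smallfiles --directory=. --rw=randwrite --bs=4k \
        --size=256m --numjobs=4 --direct=1 --group_reporting
    rm -f ddtest smallfiles*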
     
  12. eva2000 (Administrator, Staff Member)
    Yeah, IIRC on the 5x VPS gluster setup the disk I/O with NFS was between 10-23MB/s depending on the NFS tweaks used. Of course it's relative to the underlying disk I/O performance of each VPS/server. IIRC each VPS's disk I/O was between 66-90MB/s, so slow heh.
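
    The kind of NFS client mount tweaks that move those numbers around are typically along these lines (server name, export path and option values are purely illustrative):

    Code:
    # larger rsize/wsize, no atime updates, NFSv3 over TCP - example values only
    mount -t nfs -o rw,hard,noatime,rsize=1048576,wsize=1048576,vers=3,proto=tcp \
        nfs-server:/export/data /mnt/nfs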
     