
reliablesite.net E3-1230v3 server feedback

Discussion in 'Dedicated server hosting' started by Andy, Sep 17, 2014.

  1. Andy

    Andy Active Member

    540
    89
    28
    Aug 6, 2014
    Ratings:
    +132
    Local Time:
    10:12 PM
    @eva2000
Got my dedi just now and I ran the benchmark script right away. Here are the results; please let me know what you think of them, the server specs, and the bandwidth. Bummer that they installed CentOS 6.5 on it when I instructed them to install version 7.
    Code:
    -------------------------------------------
    centminmodbench.sh 0.2
    http://bench.centminmod.com
    written by: George Liu (eva2000)
    http://centminmod.com
    -------------------------------------------
    
    -------------------------------------------
    System Information
    -------------------------------------------
    
    2.6.32-431.el6.x86_64
    
    CentOS release 6.5 (Final)
    
    ----------------------------------------------
    Architecture:          x86_64
    CPU op-mode(s):        32-bit, 64-bit
    Byte Order:            Little Endian
    CPU(s):                8
    On-line CPU(s) list:   0-7
    Thread(s) per core:    2
    Core(s) per socket:    4
    Socket(s):             1
    NUMA node(s):          1
    Vendor ID:             GenuineIntel
    CPU family:            6
    Model:                 60
    Stepping:              3
    CPU MHz:               800.000
    BogoMIPS:              6584.66
    Virtualization:        VT-x
    L1d cache:             32K
    L1i cache:             32K
    L2 cache:              256K
    L3 cache:              8192K
    NUMA node0 CPU(s):     0-7
    
    ----------------------------------------------
    CPU NODE SOCKET CORE L1d:L1i:L2:L3 ONLINE
    0   0    0      0    0:0:0:0       yes
    1   0    0      1    1:1:1:0       yes
    2   0    0      2    2:2:2:0       yes
    3   0    0      3    3:3:3:0       yes
    4   0    0      0    0:0:0:0       yes
    5   0    0      1    1:1:1:0       yes
    6   0    0      2    2:2:2:0       yes
    7   0    0      3    3:3:3:0       yes
    
    ----------------------------------------------
                 total       used       free     shared    buffers     cached
    Mem:         32068        943      31124          0         47        551
    Low:         32068        943      31124
    High:            0          0          0
    -/+ buffers/cache:        344      31724
    Swap:        15991          0      15991
    
    ----------------------------------------------
    Filesystem      Size  Used Avail Use% Mounted on
    /dev/md2         42G  1.2G   39G   3% /
    tmpfs            16G     0   16G   0% /dev/shm
    /dev/md0        485M   39M  421M   9% /boot
    
    
    Code:
    -------------------------------------------
    disk ioping tests
    -------------------------------------------
    2014-09-16 21:15:26 URL:https://ioping.googlecode.com/files/ioping-0.6.tar.gz [6957/6957] -> "ioping-0.6.tar.gz" [1]
    Download done.
    ioping-0.6.tar.gz valid file.
    
    Running IOPing I/O benchmark...
    cc -std=c99 -g -Wall -Wextra -pedantic -D_GNU_SOURCE -D_FILE_OFFSET_BITS=64 -DVERSION=\"0.6\" -c -o ioping.o ioping.c
    cc -o ioping ioping.o -std=c99 -g -Wall -Wextra -pedantic -D_GNU_SOURCE -D_FILE_OFFSET_BITS=64 -lm
    
    IOPing I/O: ./ioping -c 10 .
    4096 bytes from . (ext4 /dev/md2): request=1 time=0.3 ms
    4096 bytes from . (ext4 /dev/md2): request=2 time=0.4 ms
    4096 bytes from . (ext4 /dev/md2): request=3 time=0.3 ms
    4096 bytes from . (ext4 /dev/md2): request=4 time=0.4 ms
    4096 bytes from . (ext4 /dev/md2): request=5 time=0.3 ms
    4096 bytes from . (ext4 /dev/md2): request=6 time=0.3 ms
    4096 bytes from . (ext4 /dev/md2): request=7 time=0.3 ms
    4096 bytes from . (ext4 /dev/md2): request=8 time=0.3 ms
    4096 bytes from . (ext4 /dev/md2): request=9 time=0.4 ms
    4096 bytes from . (ext4 /dev/md2): request=10 time=0.3 ms
    
    --- . (ext4 /dev/md2) ioping statistics ---
    10 requests completed in 9005.4 ms, 3018 iops, 11.8 mb/s
    min/avg/max/mdev = 0.3/0.3/0.4/0.0 ms
    
    IOPing seek rate: ./ioping -RD .
    
    --- . (ext4 /dev/md2) ioping statistics ---
    9493 requests completed in 3000.3 ms, 4435 iops, 17.3 mb/s
    min/avg/max/mdev = 0.2/0.2/0.8/0.0 ms
    
    IOPing sequential: ./ioping -RL .
    
    --- . (ext4 /dev/md2) ioping statistics ---
    3367 requests completed in 3000.2 ms, 1296 iops, 323.9 mb/s
    min/avg/max/mdev = 0.7/0.8/2.1/0.1 ms
    
    IOPing cached: ./ioping -RC .
    
    --- . (ext4 /dev/md2) ioping statistics ---
    50374 requests completed in 3000.0 ms, 472299 iops, 1844.9 mb/s
    min/avg/max/mdev = 0.0/0.0/0.0/0.0 ms
    
    
    Code:
    -------------------------------------------
    disk DD tests
    -------------------------------------------
    
    dd if=/dev/zero of=sb-io-test bs=1M count=1k conv=fdatasync
    1024+0 records in
    1024+0 records out
    1073741824 bytes (1.1 GB) copied, 8.19452 s, 131 MB/s
    
    dd if=/dev/zero of=sb-io-test bs=64k count=16k conv=fdatasync
    16384+0 records in
    16384+0 records out
    1073741824 bytes (1.1 GB) copied, 7.87942 s, 136 MB/s
    
    dd if=/dev/zero of=sb-io-test bs=1M count=1k oflag=dsync
    1024+0 records in
    1024+0 records out
    1073741824 bytes (1.1 GB) copied, 27.6187 s, 38.9 MB/s
    
    dd if=/dev/zero of=sb-io-test bs=64k count=16k oflag=dsync
    16384+0 records in
    16384+0 records out
    1073741824 bytes (1.1 GB) copied, 256.316 s, 4.2 MB/s
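For anyone comparing the two dd variants above: the difference is when data is forced to disk. conv=fdatasync flushes once at the end (bulk throughput), while oflag=dsync syncs after every single write, which is why the 64k dsync run collapses to 4.2 MB/s. A scaled-down sketch (1 MB instead of 1 GB, temp file is just a stand-in):

```shell
# Scaled-down illustration of the two dd sync modes (1 MB instead of 1 GB)
tmp=$(mktemp)

# conv=fdatasync: write everything, then flush to disk once at the end
dd if=/dev/zero of="$tmp" bs=64k count=16 conv=fdatasync 2>/dev/null

# oflag=dsync: flush after EVERY 64k write, so it measures per-write commit
# latency rather than throughput
dd if=/dev/zero of="$tmp" bs=64k count=16 oflag=dsync 2>/dev/null

stat -c %s "$tmp"   # 16 x 64 KiB = 1048576 bytes either way
rm -f "$tmp"
```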
    
    Code:
    -------------------------------------------
    disk FIO tests
    -------------------------------------------
    
    WARNING: certificate common name “www.github.com” doesn’t match requested host name “raw.githubusercontent.com”.
    2014-09-16 21:20:50 URL:https://raw.githubusercontent.com/Crowd9/Benchmark/master/fio-2.0.9.tar.gz [275092/275092] -> "fio-2.0.9.tar.gz" [1]
    Download done.
    fio-2.0.9.tar.gz valid file.
    
    Running FIO benchmark...
    
    FIO_VERSION = fio-2.0.9
    
    FIO random reads:
    randomreads: (g=0): rw=randread, bs=4K-4K/4K-4K, ioengine=libaio, iodepth=64
    fio-2.0.9
    Starting 1 process
    randomreads: Laying out IO file(s) (1 file(s) / 1024MB)
    
    randomreads: (groupid=0, jobs=1): err= 0: pid=30333: Tue Sep 16 21:21:19 2014
      read : io=1024.3MB, bw=77887KB/s, iops=19471 , runt= 13466msec
      cpu          : usr=8.89%, sys=46.17%, ctx=151212, majf=0, minf=89
      IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
         submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
         complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
         issued    : total=r=262207/w=0/d=0, short=r=0/w=0/d=0
    
    Run status group 0 (all jobs):
       READ: io=1024.3MB, aggrb=77887KB/s, minb=77887KB/s, maxb=77887KB/s, mint=13466msec, maxt=13466msec
    
    Disk stats (read/write):
        md2: ios=257440/3, merge=0/0, ticks=0/0, in_queue=0, util=0.00%, aggrios=131103/9, aggrmerge=0/0, aggrticks=426930/89, aggrin_queue=426990, aggrutil=$
      sdb: ios=130987/10, merge=0/0, ticks=426099/124, in_queue=426193, util=99.32%
      sda: ios=131220/9, merge=0/1, ticks=427761/55, in_queue=427788, util=99.32%
    
    FIO random writes:
    randomwrites: (g=0): rw=randwrite, bs=4K-4K/4K-4K, ioengine=libaio, iodepth=64
    fio-2.0.9
    Starting 1 process
    
    randomwrites: (groupid=0, jobs=1): err= 0: pid=30337: Tue Sep 16 21:23:15 2014
      write: io=1024.3MB, bw=9056.4KB/s, iops=2264 , runt=115812msec
      cpu          : usr=1.30%, sys=7.22%, ctx=193881, majf=0, minf=25
      IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
         submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
         complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
         issued    : total=r=0/w=262207/d=0, short=r=0/w=0/d=0
    
    Run status group 0 (all jobs):
      WRITE: io=1024.3MB, aggrb=9056KB/s, minb=9056KB/s, maxb=9056KB/s, mint=115812msec, maxt=115812msec
    
    Disk stats (read/write):
        md2: ios=0/264479, merge=0/0, ticks=0/0, in_queue=0, util=0.00%, aggrios=0/262367, aggrmerge=0/2114, aggrticks=0/4306881, aggrin_queue=4306821, aggru$
      sdb: ios=0/262373, merge=0/2108, ticks=0/4126854, in_queue=4126825, util=77.70%
      sda: ios=0/262361, merge=0/2120, ticks=0/4486908, in_queue=4486818, util=79.43%
    
    
    Code:
    
    -------------------------------------------
    Running bandwidth benchmark...
    -------------------------------------------
    
    ----------------------------------------------
    Download from Cachefly (http://cachefly.cachefly.net/100mb.test)
    Download Cachefly: 20.8MB/s
    
    -------------------------------------------
    USA bandwidth tests...
    -------------------------------------------
    ----------------------------------------------
    Download from Linode, Atlanta, GA, USA (http://speedtest.atlanta.linode.com/100MB-atlanta.bin)
    Download Linode, Atlanta, GA, USA: 74.2MB/s
    ----------------------------------------------
    Download from Linode, Dallas, TX, USA (http://speedtest.dallas.linode.com/100MB-dallas.bin)
    Download Linode, Dallas, TX, USA: 43.0MB/s
    ----------------------------------------------
    Download from Leaseweb, Manassas, VA, USA (http://mirror.us.leaseweb.net/speedtest/100mb.bin)
    Download Leaseweb, Manassas, VA, USA: 20.7MB/s
    ----------------------------------------------
    Download from Softlayer, Seattle, WA, USA (http://speedtest.sea01.softlayer.com/downloads/test100.zip)
    Download Softlayer, Seattle, WA, USA: 27.4MB/s
    ----------------------------------------------
    Download from Softlayer, San Jose, CA, USA (http://speedtest.sjc01.softlayer.com/downloads/test100.zip)
    Download Softlayer, San Jose, CA, USA: 23.3MB/s
    ----------------------------------------------
    Download from Softlayer, Washington, DC, USA (http://speedtest.wdc01.softlayer.com/downloads/test100.zip)
    Download Softlayer, Washington, DC, USA: 72.4MB/s
    ----------------------------------------------
    Download from VersaWeb, Las Vegas, Nevada (http://199.47.210.50/100mbtest.bin)
    Download VersaWeb, Las Vegas, Nevada: 18.1MB/s
    ----------------------------------------------
    Download from OVH, BHS, Canada (http://bhs.proof.ovh.net/files/100Mio.dat)
    Download OVH, BHS, Canada: 55.9MB/s
    ----------------------------------------------
    Download from Vultr, Los Angeles, California (http://lax-ca-us-ping.vultr.com/vultr.com.100MB.bin)
    Download Vultr, Los Angeles, California: 21.8MB/s
    ----------------------------------------------
    Download from Vultr, Seattle, Washington (http://wa-us-ping.vultr.com/vultr.com.100MB.bin)
    Download Vultr, Seattle, Washington: 19.0MB/s
    ----------------------------------------------
    Download from Vultr, Dallas, Texas (http://tx-us-ping.vultr.com/vultr.com.100MB.bin)
    Download Vultr, Dallas, Texas: 21.9MB/s
    ----------------------------------------------
    Download from Vultr, Chicago, Illinois (http://il-us-ping.vultr.com/vultr.com.100MB.bin)
    Download Vultr, Chicago, Illinois: 51.0MB/s
    ----------------------------------------------
    Download from Vultr, Atlanta, Georgia (http://ga-us-ping.vultr.com/vultr.com.100MB.bin)
    Download Vultr, Atlanta, Georgia: 30.4MB/s
    ----------------------------------------------
    Download from Vultr, Miami, Florida (http://fl-us-ping.vultr.com/vultr.com.100MB.bin)
    Download Vultr, Miami, Florida: 36.2MB/s
    ----------------------------------------------
    Download from Vultr, New York / New Jersey (http://nj-us-ping.vultr.com/vultr.com.100MB.bin)
    Download Vultr, New York / New Jersey: 112MB/s
    
    -------------------------------------------
    Asia bandwidth tests...
    -------------------------------------------
    
    ----------------------------------------------
    Download from Linode, Tokyo, JP (http://speedtest.tokyo.linode.com/100MB-tokyo.bin)
    Download Linode, Tokyo, JP: 10.3MB/s
    ----------------------------------------------
    Download from Softlayer, Singapore (http://speedtest.sng01.softlayer.com/downloads/test100.zip)
    Download Softlayer, Singapore: 7.07MB/s
    ----------------------------------------------
    Download from Vultr, Tokyo, Japan (http://hnd-jp-ping.vultr.com/vultr.com.100MB.bin)
    Download Vultr, Tokyo, Japan: 7.27MB/s
    
    -------------------------------------------
    Europe bandwidth tests...
    -------------------------------------------
    
    ----------------------------------------------
    Download from Linode, London, UK (http://speedtest.london.linode.com/100MB-london.bin)
    Download Linode, London, UK: 17.2MB/s
    ----------------------------------------------
    Download from OVH, Paris, France (http://proof.ovh.net/files/100Mio.dat)
    Download OVH, Paris, France: 24.1MB/s
    ----------------------------------------------
    Download from SmartDC, Rotterdam, Netherlands (http://mirror.i3d.net/100mb.bin)
    Download SmartDC, Rotterdam, Netherlands: 8.65MB/s
    ----------------------------------------------
    Download from Vultr, Amsterdam, Netherlands (http://ams-nl-ping.vultr.com/vultr.com.100MB.bin)
    Download Vultr, Amsterdam, Netherlands: 15.6MB/s
    ----------------------------------------------
    Download from Vultr, London, UK (http://lon-gb-ping.vultr.com/vultr.com.100MB.bin)
    Download Vultr, London, UK: 17.6MB/s
    ----------------------------------------------
    Download from Vultr, Paris, France (http://par-fr-ping.vultr.com/vultr.com.100MB.bin)
    Download Vultr, Paris, France: 16.2MB/s
    
    -------------------------------------------
    Australia bandwidth tests...
    -------------------------------------------
    
    ----------------------------------------------
    Download from Vultr, Sydney, Australia (http://syd-au-ping.vultr.com/vultr.com.100MB.bin)
    Download Vultr, Sydney, Australia: 6.93MB/s
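One way to sanity-check these figures: multiply MB/s by 8 to get Mbit/s and compare against the port speed. The New York / New Jersey result is essentially saturating a gigabit port, while the distant Sydney result is latency-limited, not port-limited:

```shell
# MB/s -> Mbit/s: multiply by 8
# 112 MB/s (Vultr NJ) is ~896 Mbit/s, i.e. near gigabit line rate;
# 6.93 MB/s (Sydney) is ~55 Mbit/s, limited by distance, not the port
echo "112 6.93" | awk '{ printf "%.0f Mbit/s  %.1f Mbit/s\n", $1 * 8, $2 * 8 }'
# prints: 896 Mbit/s  55.4 Mbit/s
```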
    


     
  2. eva2000

    eva2000 Administrator Staff Member

    53,461
    12,128
    113
    May 24, 2014
    Brisbane, Australia
    Ratings:
    +18,668
    Local Time:
    1:12 PM
    Nginx 1.27.x
    MariaDB 10.x/11.4+
With the exception of the Cachefly result, the bandwidth numbers look OK.

I know 64GB SSD performance will be much lower than larger SSDs (i.e. 240-480GB models), but the disk IOPS and random writes do seem kind of low. Still, they're much higher than SATA/SAS spinning disks: 2,000-3,000 IOPS is at least 20x faster than non-SSD disks.

What brand and model of SSD are you using for the 64GB drives?
     
  3. Andy

    Andy Active Member

Code:
cat /proc/scsi/scsi
    Attached devices:
    Host: scsi0 Channel: 00 Id: 00 Lun: 00
    Vendor: ATA Model: SanDisk SDSSDP06 Rev: 3.1.
    Type: Direct-Access ANSI SCSI revision: 05
    Host: scsi1 Channel: 00 Id: 00 Lun: 00
    Vendor: ATA Model: SanDisk SDSSDP06 Rev: 3.1.
    Type: Direct-Access ANSI SCSI revision: 05
     
  4. eva2000

    eva2000 Administrator Staff Member

    doesn't tell much

    try

    Code:
    yum -y install hdparm
    hdparm -I /dev/sda
     
  5. Andy

    Andy Active Member

I just looked over the bandwidth tests of this new dedi versus my current Rackspace server, and the new dedi has very favourable results; it's a lot faster than RS in most cases. So that's good news. All I need to know now is whether the CPU/SSD are performing better than the current server's; if so, moving to this dedi makes a whole lot more sense. The results for the Rackspace cloud server are here. Please take a look. Thank you, George.

I think it would be a lot more useful if you could highlight the key numbers to look for in the benchmark results: bigger is better in some places and smaller is better in others ;)

    Previews - centminmodbench.sh - benchmark script for Centmin Mod LEMP servers | Page 3 | Centmin Mod Community
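In the meantime, the figures worth pulling out of these dumps are the iops and MB/s values in each summary line (bigger is better for both, while the min/avg/max latencies should be small). A rough awk sketch for extracting the iops value from an ioping summary line (sample line copied from the results above):

```shell
# Grab the iops figure from an ioping summary line (the field before the "iops," token)
line='9493 requests completed in 3000.3 ms, 4435 iops, 17.3 mb/s'
echo "$line" | awk '{ for (i = 2; i <= NF; i++) if ($i == "iops,") print $(i - 1) }'
# prints: 4435
```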
     
  6. Andy

    Andy Active Member

Code:
/dev/sda:

    ATA device, with non-removable media
    Model Number: SanDisk SDSSDP064G
    Serial Number: 134449402736
    Firmware Revision: 3.1.0
    Transport: Serial, ATA8-AST, SATA Rev 2.6, SATA Rev 3.0
    Standards:
    Used: unknown (minor revision code 0x0110)
    Supported: 9 8 7 6 5
    Likely used: 9
    Configuration:
    Logical max current
    cylinders 16383 16383
    heads 16 16
    sectors/track 63 63
    --
    CHS current addressable sectors: 16514064
    LBA user addressable sectors: 123091920
    LBA48 user addressable sectors: 123091920
    Logical Sector size: 512 bytes
    Physical Sector size: 512 bytes
    Logical Sector-0 offset: 0 bytes
    device size with M = 1024*1024: 60103 MBytes
    device size with M = 1000*1000: 63023 MBytes (63 GB)
    cache/buffer size = unknown
    Form Factor: 1.8 inch
    Nominal Media Rotation Rate: Solid State Device
    Capabilities:
    LBA, IORDY(can be disabled)
    Queue depth: 32
    Standby timer values: spec'd by Standard, no device specific minimum
    R/W multiple sector transfer: Max = 1 Current = 1
    Advanced power management level: disabled
    DMA: mdma0 mdma1 mdma2 udma0 udma1 udma2 udma3 udma4 udma5 *udma6
    Cycle time: min=120ns recommended=120ns
    PIO: pio0 pio1 pio2 pio3 pio4
    Cycle time: no flow control=120ns IORDY flow control=120ns
    Commands/features:
    Enabled Supported:
    * SMART feature set
    Security Mode feature set
    * Power Management feature set
    * Write cache
    * Look-ahead
    * Host Protected Area feature set
    * WRITE_BUFFER command
    * READ_BUFFER command
    * NOP cmd
    * DOWNLOAD_MICROCODE
    Advanced Power Management feature set
    SET_MAX security extension
    * 48-bit Address feature set
    * Device Configuration Overlay feature set
    * Mandatory FLUSH_CACHE
    * FLUSH_CACHE_EXT
    * SMART error logging
    * SMART self-test
    * General Purpose Logging feature set
    * 64-bit World wide name
    * WRITE_UNCORRECTABLE_EXT command
    * Segmented DOWNLOAD_MICROCODE
    * Gen1 signaling speed (1.5Gb/s)
    * Gen2 signaling speed (3.0Gb/s)
    * Gen3 signaling speed (6.0Gb/s)
    * Native Command Queueing (NCQ)
    * Phy event counters
    Device-initiated interface power management
    * Software settings preservation
    unknown 78[8]
    * SET MAX SETPASSWORD/UNLOCK DMA commands
    * DEVICE CONFIGURATION SET/IDENTIFY DMA commands
    * Data Set Management TRIM supported (limit 8 blocks)
    * Deterministic read data after TRIM
    Security:
    Master password revision code = 65534
    supported
    not enabled
    not locked
    frozen
    not expired: security count
    supported: enhanced erase
    2min for SECURITY ERASE UNIT. 2min for ENHANCED SECURITY ERASE UNIT.
    Logical Unit WWN Device Identifier: 5001b44a59640f70
    NAA : 5
    IEEE OUI : 001b44
    Unique ID : a59640f70
    Checksum: correct
     
  7. eva2000

    eva2000 Administrator Staff Member

You can check for yourself, for instance.

     
  8. eva2000

    eva2000 Administrator Staff Member

Looks to be this model: SanDisk SDSSDP-064G-G25 2.5" 64GB SATA III Internal Solid State Drive (SSD) - Newegg.com, with rated random read/write IOPS of 7,000 and 2,000 respectively. That's pretty low for an SSD these days.

Personally, I'd look at swapping it for a larger, better SSD. Ask them what brand and model options are available. At the Newegg price for that SanDisk, you could get a much better 128GB Crucial M550 SSD, Crucial M550 CT128M550SSD1 2.5" 128GB SATA 6Gb/s MLC Internal Solid State Drive (SSD) - Newegg.com, with rated random read/write IOPS of 90,000 and 75,000 (Newegg compare).
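As a cross-check, the rated write IOPS line up with the fio random-write result earlier in the thread: throughput is roughly IOPS multiplied by the 4 KiB block size.

```shell
# Random-write throughput ~= iops * block size
# fio measured 2264 iops at 4 KiB blocks:
echo "2264 4096" | awk '{ printf "%.1f KB/s\n", $1 * $2 / 1024 }'
# prints: 9056.0 KB/s, matching fio's reported bw=9056.4KB/s
```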

I'll begin moving your posts to a dedicated hosting forum thread to continue this.
     
  9. eva2000

    eva2000 Administrator Staff Member

    wonder if @Matt and @Null could share what server specs TAZ uses too to compare.
     
  10. Andy

    Andy Active Member

    Thank you George. I have contacted them for the possibility/info about the 128GB SSD.
     
  11. Andy

    Andy Active Member

  12. eva2000

    eva2000 Administrator Staff Member

Yeah, basically the 64GB SanDisk SSD is too slow, and from the Newegg listed specs I'd expect it to be. You really need a faster SSD, 128GB minimum or in the 240GB+ range. Ask them what SSD models and capacities are available for upgrade. Maybe @RSNET-Radic can shed more light if he drops by here.

It's cases like yours that are the reason centminmodbench.sh now exists. For my private paying clients I do all this verification of expected performance, plus more checks etc. Good to have it automated :D
     
  13. Andy

    Andy Active Member

I did a hdparm -t /dev/sda and the result is:
Code:
Timing buffered disk reads: 1318 MB in 3 seconds = 438.84 MB/sec

The write speed is still about 132 MB/sec.

    What speed do you have with your current server?
     
  14. eva2000

    eva2000 Administrator Staff Member

Yeah, writes will usually be slower on 64GB-capacity SSDs; however, yours is below even its rated write speed of 390MB/s: SanDisk SDSSDP-064G-G25 2.5" 64GB SATA III Internal Solid State Drive (SSD) - Newegg.com

A Linode 2GB VPS is expected to have faster disks as it uses more enterprise-grade SSD configurations:

    Code:
    /dev/xvda:
    Timing buffered disk reads: 2142 MB in  3.00 seconds = 713.50 MB/sec
To get that on a dedicated server, you would probably need at least a 4x 256GB SSD RAID 10 configuration, and it also depends on the SSD model and brand used and the config setup.
     
  15. Andy

    Andy Active Member

    I got them to switch the 2x64GB out and put the 2x128GB in.
    Code:
    hdparm -t /dev/sda
    /dev/sda:
    Timing buffered disk reads: 1584 MB in  3.00 seconds = 527.93 MB/sec
    [root@ryan ~]# dd bs=1M count=512 if=/dev/zero of=test conv=fdatasync
    512+0 records in
    512+0 records out
    536870912 bytes (537 MB) copied, 2.97896 s, 180 MB/s
     
  16. eva2000

    eva2000 Administrator Staff Member

Run centminmodbench.sh; you can edit it to turn off all tests besides the dd, ioping and disk fio ones (and maybe the bandwidth, axel, ping and mtr tests), i.e.:

    Code:
SERVERBEAR='n'
    OPENSSLBENCH='n'
    OPENSSL_NONSYSTEM='n'
    RUN_DISKDD='y'
    RUN_DISKIOPING='y'
    RUN_DISKFIO='y'
    RUN_AXELBENCH='n'
    RUN_BANDWIDTHBENCH='n'
    RUN_VULTRTESTS='y'
    EUROPE_BANDWIDTHTESTS='y'
    ASIA_BANDWIDTHTESTS='y'
    AUSTRALIA_BANDWIDTHTESTS='y'
    USA_BANDWIDTHTESTS='y'
    RUN_PINGTESTS='n'
    RUN_MYSQLSLAP='n'
    RUN_PHPTESTS='n'
    RUN_UNIXBENCH='n'
    RUN_MTRTESTS='n'
    MTR_PACKETS='10'
    UNIXBENCH_VER='5.1.3'
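If you'd rather not edit the toggles by hand, they can be flipped with sed. A sketch on a stand-in file (point sed at centminmodbench.sh itself for the real thing):

```shell
# Demo on a stand-in file so nothing real is touched;
# substitute centminmodbench.sh as the target for actual use
printf "RUN_UNIXBENCH='y'\n" > bench-toggle-demo.sh
sed -i "s/^RUN_UNIXBENCH='y'$/RUN_UNIXBENCH='n'/" bench-toggle-demo.sh
cat bench-toggle-demo.sh
# prints: RUN_UNIXBENCH='n'
rm -f bench-toggle-demo.sh
```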
     
  17. Matt

    Matt Well-Known Member

    925
    414
    63
    May 25, 2014
    Rotherham, UK
    Ratings:
    +669
    Local Time:
    4:12 AM
    1.5.15
    MariaDB 10.2
    Code:
    root@ny1:~# hdparm -t /dev/sdc
    
    /dev/sdc:
    Timing buffered disk reads: 744 MB in  3.01 seconds = 247.44 MB/sec
    I'll raise this with Howard and @Null
     
  18. eva2000

    eva2000 Administrator Staff Member

TAZ is using a 64GB SSD too?
     
  19. Matt

    Matt Well-Known Member

    Yeah for MySQL
     
  20. eva2000

    eva2000 Administrator Staff Member

Better check the performance, IOPS etc., to see if it's up to par... though I am pretty sure TAZ's MySQL usage with XenForo would not go anywhere near the limits.