Recent updates
  • Tony Ellis replied to a discussion, Forum Slow

    Yes, especially when posting a reply. I will time this one...

  • Tony Ellis replied to a discussion, mirrorlist site down?

    No problem reaching it using the Chrome browser from Australia, so it's up...

  • Have you considered using managed switch(es) for per port bandwidth control?

  • Tony Ellis replied to a discussion, Memory leak?

    As well as the script Nick provided, do a Google search using something like "linux finding source of memory leak" - you will get many really good hits, some using just the standard OS-provided tools...
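As one hedged sketch of the standard-tools approach (assuming a Linux box with procps `ps` available; the target PID, sample count, and interval are arbitrary illustration values), you can simply record a process's resident set size over time - a steady climb under a constant workload stands out:

```shell
#!/bin/sh
# Sample the resident set size (RSS, in kB) of one process at intervals.
# A steadily growing RSS under a constant workload suggests a leak.
PID=$$            # target process id - using this shell itself as a demo
SAMPLES=3         # how many samples to take (illustrative)
INTERVAL=1        # seconds between samples (illustrative)

i=0
while [ "$i" -lt "$SAMPLES" ]; do
    rss=$(ps -o rss= -p "$PID")
    echo "$(date +%s) pid=$PID rss_kb=$rss"
    i=$((i + 1))
    [ "$i" -lt "$SAMPLES" ] && sleep "$INTERVAL"
done
```

Redirect the output to a file and graph or diff it later; for a real hunt you would raise the sample count and interval.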

  • Tony Ellis replied to a discussion, ClearOS 8


    [quote]ClearOS8 is on hold for the moment because of the tie up between IBM and RedHat. The decision is expected to be revisited in Q3[/quote]

    Well Q3 has come and gone and still absolutely nothing, or did you mean Q3 2021 or maybe 2022 :)
    Red Hat version 8.3 is now in beta - if ClearOS waits long enough and eventually decides to stay with CentOS, they can jump straight to version 9 when it is released and skip 8 altogether...

  • Flash, why do you need support for the 8125, and why is the Old Bandwidth Manager so important?
    [quote]Or has ClearOS finally come to their senses and has implemented the old Bandwidth Manager in ClearOS 7? :D[/quote]
    Not going to happen as you appear to have already accepted... As newer hardware gets introduced you will find it increasingly difficult to run an old OS that lacks the necessary support and be stuck with your old hardware.

    Some choices for you...

    1) Run a more modern OS on the bare hardware that has support for the 8125 and your ClearOS 6.x in a VM
    2) Choose something different from (and better than) the 8125 that your ClearOS 6.x does support
    3) Forget the Old Bandwidth Manager and find something else to replace it - even if it is running something other than ClearOS

  • Flash - I doubt it works out of the box. Read the thread here that you posted in to see what was necessary to gain support for 7. Since EOL for CentOS 6 is November 30, 2020, I would think it unlikely that elrepo would be interested in producing a driver for CentOS version 6. You realize, don't you, that ClearOS 6 went EOL on 1 Sep 2019 and is no longer supported? That is over a year ago. As time goes by it becomes more and more of a liability on the internet. Get yourself a copy of ClearOS 7.x and use the kmod discussed here...

  • Tony Ellis replied to a discussion, Slow write speeds on drives

    Gabriel, nothing to be sorry about - you had a problem, posted, and solved it - well done.

    The only waste is the forum search facility being so crappy that future posters with similar problems will likely not find this append and the help it might provide them...

  • Tony Ellis replied to a discussion, Slow write speeds on drives

    Gabriel - way past my bedtime - but one thing for you to research re. your RAID.
    With multiple layers and parity RAID you must get the alignment, stripe size, and block size correct - otherwise crossing boundaries causes split (extra) writes, etc.
    This sort of thing
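As a hedged sketch of what "getting the alignment correct" means in practice (the 512 KiB chunk size, 4 KiB filesystem block, and 3-disk RAID 5 below are assumed figures for illustration, not values from Gabriel's array), the usual arithmetic derives the ext4 stride and stripe-width from the RAID chunk size:

```shell
#!/bin/sh
# Compute ext4 stride/stripe-width for a parity RAID so filesystem
# writes align with the RAID stripe; misaligned writes cross chunk
# boundaries and trigger extra read-modify-write cycles.
# All figures below are illustrative assumptions.
CHUNK_KB=512                # mdadm chunk size in KiB (see mdadm --detail)
BLOCK_KB=4                  # filesystem block size in KiB
DISKS=3                     # total disks in the RAID 5
DATA_DISKS=$((DISKS - 1))   # RAID 5 spends one disk's worth on parity

STRIDE=$((CHUNK_KB / BLOCK_KB))          # fs blocks per chunk
STRIPE_WIDTH=$((STRIDE * DATA_DISKS))    # fs blocks per full data stripe

echo "stride=$STRIDE stripe_width=$STRIPE_WIDTH"
# prints: stride=128 stripe_width=256
# These would then feed something like:
#   mkfs.ext4 -b 4096 -E stride=$STRIDE,stripe-width=$STRIPE_WIDTH /dev/md0
```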

  • Tony Ellis replied to a discussion, Slow write speeds on drives

    Forgot to mention this...

    Discussing your claim of a 4x drop, citing the RAID 5 figures, e.g. "RAID5 with 4SSD: r/w 900/900 -> 900/250 MB/s", from your initial post: on a RAID 5 the write speed would be about a quarter of the read speed - 250 x 4 in your case. With parity RAID, e.g. RAID 5, parity must be verified and re-written with every write that goes to disk. This means that a RAID 5 array has to read the data, read the parity, write the data, and finally write the parity - four operations for each effective one. This gives RAID 5 a write penalty of four, i.e. a quarter of the raw speed. This is at the disk hardware level with simple I/O. Thus, to attain about the same write speed as read speed, writes must be optimized with techniques such as caching, write aggregation, queue depth, forced flush-to-disk time limits, asynchronous versus synchronous writes, whole-block writes, etc.
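The four-operations-per-write arithmetic above can be worked through as a quick sketch (the 900 MB/s raw figure is taken from the numbers quoted in this thread, purely for illustration):

```shell
#!/bin/sh
# RAID 5 write penalty: each logical write costs read-data + read-parity
# + write-data + write-parity = 4 disk operations, so unoptimized write
# throughput is roughly raw throughput divided by 4.
RAW_MBS=900    # raw/read throughput in MB/s (figure assumed from the post)
PENALTY=4      # RAID 5 write penalty

EFFECTIVE=$((RAW_MBS / PENALTY))
echo "expected unoptimized write throughput ~ ${EFFECTIVE} MB/s"
# prints: expected unoptimized write throughput ~ 225 MB/s
```

That ~225 MB/s is right in line with the ~250 MB/s writes reported, which is why the drop looks like the penalty showing through rather than a fault.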

    I don't have an SSD RAID here, but this should not matter for the purpose of this discussion... Using a 3-disk RAID 5 with WD "RED" drives (non-SMR) we get:
    r/w 234/205 using your fio script
    r/w 189/38 increasing file-size 10x

    Here we increased the file-size so that all the software enhancements that limit the effect of writes on parity RAID have run out of resources. This shows the importance of these software enhancements, and thus also where your performance may have changed - such as through a parameter change. Reads have also suffered, but by a much smaller percentage. There is much of interest on the web here, for example Understanding RAID.
    Repeated the above test on an SSD, not RAID. Not such a big discrepancy:

    r/w 523/504 per your script
    r/w 527/348 using fio large file

    One of the reasons so many stats are run here is to pick up when something goes awry. Being able to notice a change very quickly makes problem solving so much easier, and it has been very useful in the past. E.g. Main Server Stats