


Has anyone fiddled with SnapRAID?

A quote from the SnapRAID site:


SnapRAID is a backup program for disk arrays. It stores parity information of your data and it recovers from up to six disk failures.

SnapRAID is mainly targeted for a home media center, with a lot of big files that rarely change.

Beside the ability to recover from disk failures, other features of SnapRAID are:

All your data is hashed to ensure data integrity and to avoid silent corruption.
If the failed disks are too many to allow a recovery, you lose the data only on the failed disks. All the data in the other disks is safe.
If you accidentally delete some files in a disk, you can recover them.
You can start with already filled disks.
The disks can have different sizes.
You can add disks at any time.
It doesn't lock-in your data. You can stop using SnapRAID at any time without the need to reformat or move data.
To access a file, a single disk needs to spin, saving power and producing less noise.


Link to the SnapRAID site: http://www.snapraid.it/



HOW-TO install SnapRAID 11.0 on ClearOS 7.x
Saturday, March 12 2016, 08:30 AM
Responses (10)
  • Accepted Answer

    Sunday, March 13 2016, 07:08 AM - #Permalink
    I asked this because it looks like an interesting option to use in combination with Seagate's 8TB SMR drives. From what I've read, SMR drives are not suitable for RAID/ZFS/BTRFS file systems.
  • Accepted Answer

    Friday, March 25 2016, 12:55 PM - #Permalink
    ***Important: please try this in a virtual machine first***


    Okay, let's install SnapRAID 11.0 on ClearOS Community 7.x.

    Update your system

    yum update


    Install gcc

    yum --enablerepo=* install gcc


    Make directory

    mkdir /var/lib/snapraid


    Change permissions

    chmod 755 /var/lib/snapraid


    Change directory

    cd /var/lib/snapraid


    Download SnapRAID.

    wget https://github.com/amadvance/snapraid/releases/download/v11.0/snapraid-11.0.tar.gz


    Unpack SnapRAID:

    tar xvzf snapraid-11.0.tar.gz


    Change directory:

    cd snapraid-11.0


    Configure, build, check, and install SnapRAID:

    ./configure
    make
    make check
    make install


    Check if SnapRAID is installed:


    snapraid status


    Output:

    [root@localhost snapraid]# snapraid status
    Self test...
    No configuration file found at '/etc/snapraid.conf'


    Remove the downloaded package:

    rm /var/lib/snapraid/snapraid-11.0.tar.gz


    So here you go, SnapRAID is installed! :)


    You can find the manual by typing:

    man snapraid
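
    Note that the "No configuration file found" message above is expected: SnapRAID still needs a configuration file before the first sync. A minimal sketch of /etc/snapraid.conf is below; the parity, content and data paths are only examples based on the test setup later in this thread, so adjust them to your own disks:

    # /etc/snapraid.conf (minimal sketch, paths are assumptions)
    parity /var/flexshare/shares/diskp/snapraid.parity
    content /var/snapraid/snapraid.content
    content /var/flexshare/shares/disk01/snapraid.content
    disk d1 /var/flexshare/shares/disk01/
    exclude lost+found/

    After creating the configuration, run "snapraid sync" to build the initial parity.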





    03/25/2016 v0.1
    03/26/2016 v0.2
    01/02/2017 v0.3
  • Accepted Answer

    Monday, January 02 2017, 09:22 AM - #Permalink
    Morning guys,

    I have tried to install SnapRAID 11.0 in a virtual machine and it installed successfully. I'm not sure what went wrong last week when trying to compile SnapRAID... I have rewritten the install guide. If there are any errors in the guide, please let me know and I will change it.

    @Nick I will also check the repos and adjust the how-to so that it can also be installed from the repos.
  • Accepted Answer

    Friday, March 25 2016, 02:03 PM - #Permalink
    It seems to work fine in my virtual machine. I've set up a parity drive with one data disk. I've synced everything and "diskp" (the parity drive) holds all the parity data. This is a really cool solution with Seagate's cheap 8TB SMR drives.


    snapraid sync
    Self test...
    Loading state from /var/snapraid/snapraid.content...
    WARNING! Content file '/var/snapraid/snapraid.content' not found, trying with another copy...
    Loading state from /var/flexshare/shares/disk01/snapraid.contet...
    No content file found. Assuming empty.
    Scanning disk d1...
    Using 0 MiB of memory for the FileSystem.
    Initializing...
    Saving state to /var/snapraid/snapraid.content...
    Saving state to /var/flexshare/shares/disk01/snapraid.contet...
    Verifying /var/snapraid/snapraid.content...
    Verifying /var/flexshare/shares/disk01/snapraid.contet...
    Syncing...
    Using 16 MiB of memory for 32 blocks of IO cache.
    100% completed, 4699 MB processed:00 ETA

    d1 3% | **
    parity 89% | ******************************************************
    raid 2% | *
    hash 4% | **
    sched 0% |
    misc 0% |
    |______________________________________________________________
    wait time (total, less is better)

    Everything OK
    Saving state to /var/snapraid/snapraid.content...
    Saving state to /var/flexshare/shares/disk01/snapraid.contet...
    Verifying /var/snapraid/snapraid.content...
    Verifying /var/flexshare/shares/disk01/snapraid.contet...




    snapraid status
    Self test...
    Loading state from /var/snapraid/snapraid.content...
    Using 0 MiB of memory for the FileSystem.
    SnapRAID status report:

       Files  Fragmented  Excess     Wasted  Used  Free  Use  Name
              Files       Fragments  GB      GB    GB
           3           0          0     0.0     4     3  54%  d1
    --------------------------------------------------------------------------
           3           0          0     0.0     4     3  54%


    100%|o
    |o
    |o
    |o
    |o
    |o
    |o
    50%|o
    |o
    |o
    |o
    |o
    |o
    |o
    0%|o_____________________________________________________________________
    0 days ago of the last scrub/sync 0

    The oldest block was scrubbed 0 days ago, the median 0, the newest 0.

    No sync is in progress.
    The 100% of the array is not scrubbed.
    You have 1 files with zero sub-second timestamp.
    Run the 'touch' command to set it to a not zero value.
    No rehash is in progress or needed.
    No error detected.
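
    Since the status report says none of the array has been scrubbed yet, it is probably worth running a scrub from time to time. Just a sketch of the usual maintenance commands; the cron schedule and the binary path (default "make install" prefix) are assumptions, not part of the guide above:

    # verify part of the array against the stored hashes
    snapraid scrub

    # check the result
    snapraid status

    # example root crontab entries: nightly sync, weekly scrub (schedule is an assumption)
    0 3 * * * /usr/local/bin/snapraid sync
    0 5 * * 0 /usr/local/bin/snapraid scrub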



    The only thing is: how do we make this work with Flexshares? Mount every data disk to a flexshare? Not sure yet...


    /dev/sdb --> /var/flexshare/shares/disk01
    /dev/sdc --> /var/flexshare/shares/disk02
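
    One way to get that mapping would be plain fstab mounts onto the flexshare directories. A sketch only; the device names, partition layout and filesystem type here are assumptions:

    # /etc/fstab (example; use your own devices/UUIDs and filesystem)
    /dev/sdb1  /var/flexshare/shares/disk01  ext4  defaults  0 2
    /dev/sdc1  /var/flexshare/shares/disk02  ext4  defaults  0 2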
  • Accepted Answer

    Thursday, July 07 2016, 06:20 PM - #Permalink
    I think I found an interesting option to solve the problem of spanning directories over several drives.

    Mergerfs


    mergerfs is a union filesystem geared towards simplifying storage and management of files across numerous commodity storage devices. It is similar to mhddfs, unionfs, and aufs.



    Features

    Runs in userspace (FUSE)
    Configurable behaviors
    Support for extended attributes (xattrs)
    Support for file attributes (chattr)
    Runtime configurable (via xattrs)
    Safe to run as root
    Opportunistic credential caching
    Works with heterogeneous filesystem types
    Handling of writes to full drives
    Handles pool of readonly and read/write drives


    It seems to work on ClearOS, but to be certain I have to do some testing.
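
    For reference, a rough sketch of what a mergerfs pool over the two flexshare data disks could look like; the mount point and options are assumptions and not tested on ClearOS:

    # one-off test mount: pool disk01 and disk02 into a single tree
    mergerfs -o defaults,allow_other,use_ino /var/flexshare/shares/disk01:/var/flexshare/shares/disk02 /var/flexshare/shares/pool

    # or the equivalent /etc/fstab entry
    /var/flexshare/shares/disk01:/var/flexshare/shares/disk02  /var/flexshare/shares/pool  fuse.mergerfs  defaults,allow_other,use_ino  0 0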
  • Accepted Answer

    Friday, December 30 2016, 10:06 AM - #Permalink
    This guide is not working at the moment because SnapRAID needs version 1.14 of make and the repositories hold an older version:


    [root@enterprise snapraid-11.0]# rpm -qv automake
    automake-1.13.4-3.el7.noarch


    So you cannot build SnapRAID.
  • Accepted Answer

    Sunday, January 01 2017, 04:32 PM - #Permalink
    I was quite literally getting the things together I needed to build/upgrade a Community box to 7.x and decided to pop into the forums only to see this.

    Is it safe to assume this applies to the latest 7.x iso? Also maybe I'm misunderstanding something here, but in the many years I've used SnapRAID I've never built it with automake installed. In fact I just built SnapRAID 11 on a 5.x box a week or so ago that didn't have automake installed.

    Thanks for the insight.

    Marcel van Leeuwen wrote:
    This guide is not working at the moment because SnapRAID needs version 1.14 of make and the repositories hold an older version:


    [root@enterprise snapraid-11.0]# rpm -qv automake
    automake-1.13.4-3.el7.noarch


    So you cannot build SnapRAID.
  • Accepted Answer

    Sunday, January 01 2017, 05:03 PM - #Permalink
    Is there some confusion between make and automake? Make is v3.82 in ClearOS 7.2.

    Also, is there any need to compile it? An RPM is available in clearos-epel (10.0-1.el7).
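
    If the packaged version is recent enough, installing from the repository should just be the following (repo name taken from the line above; the exact command is untested here):

    yum --enablerepo=clearos-epel install snapraid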
  • Accepted Answer

    Sunday, January 01 2017, 06:45 PM - #Permalink
    Hi,

    Tomorrow I will take another look...

    Nick, thanks for the heads up!