
Resolved
0 votes
I am seeing this in the system log every time the notification Cron job runs:

engine: exception: error: /usr/clearos/apps/base/libraries/Shell.php (207): #015 #015Error: /dev/md1: unrecognised disk label
engine: exception: debug backtrace: /usr/clearos/apps/storage/libraries/Storage_Device.php (461): execute
engine: exception: debug backtrace: /usr/clearos/apps/storage/libraries/Storage_Device.php (820): get_partition_info
engine: exception: debug backtrace: /usr/clearos/apps/storage/libraries/Storage_Device.php (346): _scan
engine: exception: debug backtrace: /usr/clearos/apps/raid/libraries/Raid.php (737): get_devices
engine: exception: debug backtrace: /usr/clearos/apps/raid/controllers/software.php (220): get_arrays
engine: exception: debug backtrace: GUI (0): get_state
engine: exception: debug backtrace: /usr/clearos/framework/system/core/CodeIgniter.php (359): call_user_func_array
engine: exception: debug backtrace: /usr/clearos/framework/htdocs/app/index.php (222): require_once

ClearOS 6.6
/etc/cron.d/app-raid:
"0 5 * * * root /usr/sbin/raid-notification >/dev/null 2>&1"

The RAID1 is clean and I am not seeing any issues to indicate otherwise; /dev/md0 and /dev/md1 are both valid.

The web GUI does not appear to completely update or disable the monitoring and notification options on save.
Friday, November 06 2015, 12:57 AM
Responses (37)

    philipz
    Thursday, May 19 2016, 08:39 PM - #Permalink
    The SD card is connected to the main board - the chassis is an HP MicroServer Gen8, so it is definitely not USB-connected. Here is the output from hdparm:
    hdparm /dev/sda

    /dev/sda:
    SG_IO: bad/missing sense data, sb[]: 70 00 05 00 00 00 00 0a 00 00 00 00 20 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
    multcount = 0 (off)
    readonly = 0 (off)
    readahead = 256 (on)
    geometry = 60906/64/32, sectors = 124735488, start = 0

    And after running this command, the usual message "ata_id[22337]: HDIO_GET_IDENTITY failed for '/dev/sda'" does NOT appear in /var/log/messages.

    Thursday, May 19 2016, 04:07 PM - #Permalink
    Is your SD card USB-connected? I can create that message by simply plugging in a USB memory stick:

    May 20 01:34:09 danda kernel: usb 2-1.6: new high speed USB device number 7 using ehci_hcd
    May 20 01:34:09 danda kernel: usb 2-1.6: New USB device found, idVendor=0781, idProduct=5575
    May 20 01:34:09 danda kernel: usb 2-1.6: New USB device strings: Mfr=1, Product=2, SerialNumber=3
    May 20 01:34:09 danda kernel: usb 2-1.6: Product: Cruzer Glide
    May 20 01:34:09 danda kernel: usb 2-1.6: Manufacturer: SanDisk
    May 20 01:34:09 danda kernel: usb 2-1.6: SerialNumber: 20044528411B84924A11
    May 20 01:34:09 danda kernel: usb 2-1.6: configuration #1 chosen from 1 choice
    May 20 01:34:09 danda kernel: scsi12 : SCSI emulation for USB Mass Storage devices
    May 20 01:34:10 danda kernel: scsi 12:0:0:0: Direct-Access SanDisk Cruzer Glide 2.01 PQ: 0 ANSI: 6
    May 20 01:34:10 danda kernel: sd 12:0:0:0: Attached scsi generic sg5 type 0
    May 20 01:34:10 danda kernel: sd 12:0:0:0: [sde] 31266816 512-byte logical blocks: (16.0 GB/14.9 GiB)
    May 20 01:34:10 danda kernel: sd 12:0:0:0: [sde] Write Protect is off
    May 20 01:34:10 danda kernel: sd 12:0:0:0: [sde] Assuming drive cache: write through
    May 20 01:34:10 danda kernel: sd 12:0:0:0: [sde] Assuming drive cache: write through
    May 20 01:34:10 danda kernel: sde: sde1
    May 20 01:34:10 danda kernel: sd 12:0:0:0: [sde] Assuming drive cache: write through
    May 20 01:34:10 danda kernel: sd 12:0:0:0: [sde] Attached SCSI disk
    May 20 01:34:10 danda ata_id[25317]: HDIO_GET_IDENTITY failed for '/dev/sde'

    However, I cannot trigger any more "HDIO_GET_IDENTITY failed" error messages when opening or performing any actions on the "Software RAID Manager" webconfig page with it plugged in.
    The only source of the "HDIO_GET_IDENTITY failed" message I could find was in hdparm; hdparm is required by pm-utils, which in turn is required by hal. Hal is responsible for discovering, enumerating and mediating access to most hardware - so is that the link? As "ata_id" is used to provide information about an ATA device, which a USB device is not, it would fail. See the header in /usr/src/kernels/2.6.32-573.1.1.v6.x86_64/include/linux/hdreg.h, line 393:

    * Structure returned by HDIO_GET_IDENTITY, as per ANSI NCITS ATA6 rev.1b spec.

    Thursday, May 19 2016, 01:36 PM - #Permalink
    Are you sure that can be attributed back to RAID or the Storage Manager app? My GoogleFu seems to indicate that those errors are related to the utility 'hdparm', and I can't see anywhere in the API where we use that.

    B

    philipz
    Wednesday, May 18 2016, 02:33 PM - #Permalink
    Hi Ben,
    unfortunately, even after a restart of webconfig, I still get a noisy /var/log/messages:
    May 18 17:31:05 ata_id[13546]: HDIO_GET_IDENTITY failed for '/dev/sda'
    May 18 17:31:07 ata_id[14076]: HDIO_GET_IDENTITY failed for '/dev/sda'
    May 18 17:31:09 ata_id[14594]: HDIO_GET_IDENTITY failed for '/dev/sda'
    May 18 17:31:12 ata_id[15121]: HDIO_GET_IDENTITY failed for '/dev/sda'
    May 18 17:31:14 ata_id[15635]: HDIO_GET_IDENTITY failed for '/dev/sda'
    May 18 17:31:16 ata_id[16150]: HDIO_GET_IDENTITY failed for '/dev/sda'

    philipz
    Wednesday, May 18 2016, 02:25 PM - #Permalink
    Hi Ben,

    I made the suggested changes. When I run raid-notification -f on the CLI I get only one error line in messages, but when I open the "Software RAID Manager" page I get "noise" in /var/log/messages every 2 seconds until I change pages.
    Do I need to restart some process, or do changes to Storage_Device.php apply immediately? I ask because without adding 'q', a test from the CLI also gives only one error line in /var/log/messages - my opinion is that the function is called many times while the page https://clearos:81/app/raid is open in the web browser.

    Tuesday, May 17 2016, 12:45 PM - #Permalink
    @phillipz

    In your case, those are low-level system/kernel messages, not something from the ClearOS API, and if anything they are related to the Storage_Device class, not RAID. I won't file a bug (yet), but I would be curious to know if you edited the file:

    /usr/clearos/apps/storage/libraries/Storage_Device.php

    And made this small change:


    public function get_partition_info($device)
    {
        clearos_profile(__METHOD__, __LINE__);

        // Load information from sfdisk if no partitions
        //----------------------------------------------

        $options['validate_exit_code'] = FALSE;
        $options['env'] = 'LANG=en_US';

        $shell = new Shell();
        $retval = $shell->execute(self::COMMAND_SFDISK, '-d ' . $device, TRUE, $options);
        $lines = $shell->get_output();


    To:


    public function get_partition_info($device)
    {
        clearos_profile(__METHOD__, __LINE__);

        // Load information from sfdisk if no partitions
        //----------------------------------------------

        $options['validate_exit_code'] = FALSE;
        $options['env'] = 'LANG=en_US';

        $shell = new Shell();
        $retval = $shell->execute(self::COMMAND_SFDISK, '-dq ' . $device, TRUE, $options);
        $lines = $shell->get_output();


    All we're doing is adding the '-q' flag to suppress warnings on the sfdisk command.
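    For anyone applying this by hand, a one-liner along these lines could make the edit. This is an illustrative sketch shown against a scratch copy of the line; on a live system the target would be /usr/clearos/apps/storage/libraries/Storage_Device.php (back it up first), and the exact sed pattern is an assumption you should verify against your file.

    ```shell
    # Scratch-copy demo of the one-character change: add sfdisk's quiet flag.
    FILE=$(mktemp)
    cat > "$FILE" <<'EOF'
    $retval = $shell->execute(self::COMMAND_SFDISK, '-d ' . $device, TRUE, $options);
    EOF
    sed -i "s/'-d '/'-dq '/" "$FILE"   # '-d ' becomes '-dq '
    cat "$FILE"
    rm -f "$FILE"
    ```

    The cat at the end lets you eyeball the result before touching the real file.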

    If you make that change and then run:


    raid-notification -f


    Do you see any warnings surface? Is your messages log still noisy?

    B.

    philipz
    Tuesday, May 17 2016, 10:36 AM - #Permalink
    Unfortunately, there is still a minor bug - /dev/sda is an SD card. This is coming from /var/log/messages:
    May 17 13:18:36 ata_id[1169]: HDIO_GET_IDENTITY failed for '/dev/sda'
    May 17 13:18:38 ata_id[1701]: HDIO_GET_IDENTITY failed for '/dev/sda'
    May 17 13:18:40 ata_id[2215]: HDIO_GET_IDENTITY failed for '/dev/sda'
    May 17 13:18:42 ata_id[2746]: HDIO_GET_IDENTITY failed for '/dev/sda'
    May 17 13:18:44 ata_id[3342]: HDIO_GET_IDENTITY failed for '/dev/sda'
    May 17 13:18:47 ata_id[3886]: HDIO_GET_IDENTITY failed for '/dev/sda'
    May 17 13:18:49 ata_id[4441]: HDIO_GET_IDENTITY failed for '/dev/sda'
    May 17 13:18:51 ata_id[4992]: HDIO_GET_IDENTITY failed for '/dev/sda'
    May 17 13:18:53 ata_id[5506]: HDIO_GET_IDENTITY failed for '/dev/sda'
    May 17 13:18:55 ata_id[6085]: HDIO_GET_IDENTITY failed for '/dev/sda'
    May 17 13:18:57 ata_id[6595]: HDIO_GET_IDENTITY failed for '/dev/sda'


    And this is from CLI
    fdisk -l /dev/sda

    Disk /dev/sda: 63.9 GB, 63864569856 bytes
    64 heads, 32 sectors/track, 60906 cylinders
    Units = cylinders of 2048 * 512 = 1048576 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x000d88e7

    Device Boot Start End Blocks Id System
    /dev/sda1 * 2 201 204800 83 Linux
    Partition 1 does not end on cylinder boundary.
    /dev/sda2 202 35201 35840000 83 Linux
    Partition 2 does not end on cylinder boundary.
    /dev/sda3 35202 39297 4194304 82 Linux swap / Solaris
    Partition 3 does not end on cylinder boundary.
    /dev/sda4 39298 60906 22127616 5 Extended
    Partition 4 does not end on cylinder boundary.
    /dev/sda5 39300 51299 12288000 83 Linux
    /dev/sda6 51301 60906 9836544 83 Linux


    Of course, the version is the latest:
     rpm -q app-raid
    app-raid-1.6.9-1.v6.noarch

    Saturday, May 14 2016, 02:18 AM - #Permalink
    Terrific - thanks Ben

    [root@danda ~]# /usr/sbin/raid-notification -f
    Array Size Mount Level Status
    ------------------------------------------------------------------
    /dev/md2 10239MB /boinc RAID-1 Clean
    /dev/md10 1802261MB /work RAID-5 Clean
    /dev/md3 40959MB /alex_old/var RAID-1 Clean
    /dev/md4 383MB RAID-1 Clean
    /dev/md0 1023MB RAID-1 Clean
    [root@danda ~]# rpm -q app-raid
    app-raid-1.6.9-1.v6.noarch
    [root@danda ~]#

    Thursday, May 12 2016, 02:02 PM - #Permalink
    Got it...and know why I don't see it...

    This method picks up (as it should) the DVD drive....which has no partitioning, of course. It's an easy fix to check for that so that PHP does not throw up that ugly warning.

    B.

    Thursday, May 12 2016, 01:59 PM - #Permalink
    Hi Ben - email sent

    Thursday, May 12 2016, 01:38 PM - #Permalink
    @Tony - sure...I'll keep an eye on it if you send it to developer at clearfoundation dot com.

    B.

    Thursday, May 12 2016, 01:32 AM - #Permalink
    Ben

    The output is 516 lines... probably a bit too large to post here?
    Better to send it as an attachment to an email address? Or?

    Wednesday, May 11 2016, 11:14 PM - #Permalink
    Tony,

    It doesn't affect the app, but it's not great coding.

    If you save the following code snippet to /tmp/test.php and run as:

    php /tmp/test.php

    And post back the output, it will shed some light.


    <?php

    $bootstrap = getenv('CLEAROS_BOOTSTRAP') ? getenv('CLEAROS_BOOTSTRAP') : '/usr/clearos/framework/shared';
    require_once $bootstrap . '/bootstrap.php';
    use \clearos\apps\storage\Storage_Device as Storage_Device;
    clearos_load_library('storage/Storage_Device');

    $storage = new Storage_Device();
    print_r($storage->get_devices());


    B.

    Wednesday, May 11 2016, 09:47 PM - #Permalink
    Are these php warnings expected?

    [root@danda ~]# /usr/sbin/raid-notification -f
    PHP notice: Undefined index: partitions - /usr/clearos/apps/raid/libraries/Raid.php (749)
    PHP warning: array_keys() expects parameter 1 to be array, null given - /usr/clearos/apps/raid/libraries/Raid.php (749)
    PHP notice: Undefined index: partitions - /usr/clearos/apps/raid/libraries/Raid.php (749)
    PHP warning: array_keys() expects parameter 1 to be array, null given - /usr/clearos/apps/raid/libraries/Raid.php (749)
    PHP notice: Undefined index: partitions - /usr/clearos/apps/raid/libraries/Raid.php (749)
    PHP warning: array_keys() expects parameter 1 to be array, null given - /usr/clearos/apps/raid/libraries/Raid.php (749)
    PHP notice: Undefined index: partitions - /usr/clearos/apps/raid/libraries/Raid.php (749)
    PHP warning: array_keys() expects parameter 1 to be array, null given - /usr/clearos/apps/raid/libraries/Raid.php (749)
    PHP notice: Undefined index: partitions - /usr/clearos/apps/raid/libraries/Raid.php (749)
    PHP warning: array_keys() expects parameter 1 to be array, null given - /usr/clearos/apps/raid/libraries/Raid.php (749)
    PHP notice: Undefined index: partitions - /usr/clearos/apps/raid/libraries/Raid.php (749)
    PHP warning: array_keys() expects parameter 1 to be array, null given - /usr/clearos/apps/raid/libraries/Raid.php (749)
    PHP notice: Undefined index: partitions - /usr/clearos/apps/raid/libraries/Raid.php (749)
    PHP warning: array_keys() expects parameter 1 to be array, null given - /usr/clearos/apps/raid/libraries/Raid.php (749)
    PHP notice: Undefined index: partitions - /usr/clearos/apps/raid/libraries/Raid.php (749)
    PHP warning: array_keys() expects parameter 1 to be array, null given - /usr/clearos/apps/raid/libraries/Raid.php (749)
    PHP notice: Undefined index: partitions - /usr/clearos/apps/raid/libraries/Raid.php (749)
    PHP warning: array_keys() expects parameter 1 to be array, null given - /usr/clearos/apps/raid/libraries/Raid.php (749)
    PHP notice: Undefined index: partitions - /usr/clearos/apps/raid/libraries/Raid.php (749)
    PHP warning: array_keys() expects parameter 1 to be array, null given - /usr/clearos/apps/raid/libraries/Raid.php (749)
    Array Size Mount Level Status
    ------------------------------------------------------------------
    /dev/md2 10239MB /boinc RAID-1 Clean
    /dev/md10 1802261MB /work RAID-5 Clean
    /dev/md3 40959MB /alex_old/var RAID-1 Clean
    /dev/md4 383MB RAID-1 Clean
    /dev/md0 1023MB RAID-1 Clean
    [root@danda ~]#


    [root@danda ~]# cat /proc/mdstat
    Personalities : [raid1] [raid6] [raid5] [raid4]
    md2 : active raid1 sdd2[1] sdc2[0]
    10485696 blocks [2/2] [UU]
    bitmap: 5/160 pages [20KB], 32KB chunk

    md10 : active raid5 sdd6[3] sdb8[2] sdc6[1]
    1845515264 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU]
    bitmap: 0/7 pages [0KB], 65536KB chunk

    md3 : active raid1 sdd5[1] sdc5[0]
    41942976 blocks [2/2] [UU]
    bitmap: 0/160 pages [0KB], 128KB chunk

    md4 : active raid1 sdd1[1] sdc1[0]
    393152 blocks [2/2] [UU]
    bitmap: 0/1 pages [0KB], 65536KB chunk

    md0 : active raid1 sdc3[0] sdd3[1]
    1048512 blocks [2/2] [UU]
    bitmap: 0/1 pages [0KB], 65536KB chunk

    unused devices: <none>
    [root@danda ~]#

    Wednesday, May 11 2016, 01:38 AM - #Permalink
    Good to hear....thx for watching out for updates.

    As for the 404's...yup...someone changed one or more Apache redirects or put in place a new one that directs to the store that takes precedence over the old ones. The website guys have been notified...hopefully they find a resolution quickly.

    Strangely, version 7 redirects were not affected.

    B.

    Wednesday, May 11 2016, 12:31 AM - #Permalink
    Thanks Ben...

    Installed app-raid version 1.6.8-1.v6.noarch and now get an email when performing a test while "disabled".

    Now - if only we could see those updated help pages in the User Guide instead of a miserable 404...

    Tuesday, May 10 2016, 12:35 PM - #Permalink
    Thanks Ben...
    Clicking on "User Guide" all I got was :-

    Page Not Found
    We're sorry, but the page you requested could not be found.

    This is true for every single Webconfig page tried so far with Version 6.7.
    Something is badly broken...
    A few URLs to illustrate the point... I have reported this before...

    https://store.clearcenter.com/redirect/ClearOS/6.2.0/userguide/raid
    https://store.clearcenter.com/redirect/ClearOS/6.2.0/userguide/date
    https://store.clearcenter.com/redirect/ClearOS/6.2.0/userguide/accounts

    So Ben, not only is an update to the doc needed - somebody also needs to get access to the User Guide working; otherwise we will keep misunderstanding things as we blunder about, taking educated guesses at what some of the options on various Webconfig pages really do :-) Not good...

    Tuesday, May 10 2016, 10:57 AM - #Permalink
    Hi Tony,

    #1 is a bug...easy fix.

    #2 is a misunderstanding. Setting the frequency to 1 minute will check your RAID every minute...however, an email will only be sent out if something has changed since the last time it checked. Setting it to "Failure only" means you won't receive an email every minute as a RAID rebuilds (as the %sync changes constantly)...you will just receive an alert once, when a RAID array actually fails. Setting it to "Always" means you will receive an alert any time there is a status change in the array summary. I can update the docs with this info.
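    The "only notify on change" behaviour can be sketched in a few lines of shell. This is an illustration of the idea only, not the actual raid-notification code - the state file and the summary string are stand-ins:

    ```shell
    # Illustration: hash the array summary and alert only when the hash moves.
    STATE=$(mktemp -u)   # hypothetical state file (starts out absent)
    summary="/dev/md0 RAID-1 Clean; /dev/md1 RAID-1 Clean"   # stand-in report
    current=$(printf '%s' "$summary" | md5sum | cut -d' ' -f1)
    previous=$(cat "$STATE" 2>/dev/null)
    if [ "$current" != "$previous" ]; then
        echo "status changed - send notification"
        printf '%s\n' "$current" > "$STATE"   # remember what we reported
    else
        echo "no change - stay quiet"
    fi
    rm -f "$STATE"
    ```

    Run once a minute from cron, a loop like this stays silent for as long as the summary text is unchanged, which is why a 1-minute frequency does not mean an email every minute.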

    B.

    Tuesday, May 10 2016, 05:08 AM - #Permalink
    Thanks Ben.

    Now works as expected - well not quite...

    1). If you have everything setup with Monitor RAID "disabled" and press "Send Test Email" it appears to send an email and in my case comes back with "Success. Email notification has been sent to: admin@sraellis.no-ip.com." Unfortunately no email is ever received :-(

    Now if you press the button with Monitor RAID "enabled" - exactly the same happens except an email is received :-)

    So... surely when it is disabled it should either send the email properly anyway, or come back with a message indicating the app must be enabled for the test, or similar.

    2). Frequency doesn't work.
    I set it to every minute as a test. Nothing was received - no emails whatsoever... (yes - it was "enabled" and "always").
    Pressing the test button still works... report below:

    RAID Status:
    ============

    Date: May 10 2016 15:05:18 AEST
    Status: Clean

    Array Size Mount Level Status
    ------------------------------------------------------------------
    /dev/md2 10239MB /boinc RAID-1 Clean
    /dev/md10 1802261MB /work RAID-5 Clean
    /dev/md3 40959MB /alex_old/var RAID-1 Clean
    /dev/md4 383MB RAID-1 Clean
    /dev/md0 1023MB RAID-1 Clean

    and the system

    [root@danda ~]# uname -r
    2.6.32-573.1.1.v6.x86_64
    [root@danda ~]# rpm -qa | grep raid
    app-raid-1.6.7-1.v6.noarch
    app-raid-core-1.6.7-1.v6.noarch
    [root@danda ~]# cat /etc/clearos-release
    ClearOS Community release 6.7.0 (Final)
    [root@danda ~]#

    Monday, May 09 2016, 08:43 PM - #Permalink
    I back-ported the changes to v6 repos today....mirrors may still be syncing, but over the next 12 hours, you should be able to run (on ClearOS 6 Community or Pro):

    yum --enablerepo=clearos-test upgrade app-raid


    B.

    Friday, May 06 2016, 02:18 PM - #Permalink
    The original post was for Version 6 :-) so that is what I am using... (in fact my Version 7.2 doesn't have any raid arrays).

    ClearOS 6.6
    /etc/cron.d/app-raid:
    "0 5 * * * root /usr/sbin/raid-notification >/dev/null 2>&1"

    EDIT

    @Tony - when you run your 'yum clean all', you need the '--enablerepo=clearos-updates-testing' parameter.

    I should have mentioned that I had tried the commands you provided just before my "list" command. But since the Version 6.x app wasn't updated, it's immaterial anyway... my poor communication.

    Friday, May 06 2016, 02:04 PM - #Permalink

    Friday, May 06 2016, 02:03 PM - #Permalink
    BTW...I only updated RAID on the ClearOS 7 branch...I didn't back port anything to version 6...maybe that's why we're all cross-posting.

    B.

    Friday, May 06 2016, 02:01 PM - #Permalink
    @Tony - when you run your 'yum clean all', you need the '--enablerepo=clearos-updates-testing' parameter.

    B

    Paul
    Friday, May 06 2016, 01:53 PM - #Permalink
    @Ben

    I did run the upgrade command - it is shown in the code snippet. Anyhow, I suspect the mirror (ftp.nluug.nl) is not up to date yet. I will try again later.

    Friday, May 06 2016, 01:35 PM - #Permalink
    Thanks for looking at this, Ben. I cannot see the update so far...

    [root@danda ~]# yum clean all && yum list app-raid --enablerepo=* --showduplicates
    Loaded plugins: clearcenter-marketplace, fastestmirror, kabi
    Loading support for CentOS kernel ABI
    Cleaning repos: clearos clearos-addons clearos-centos clearos-centos-updates
    : clearos-extras clearos-fast-updates clearos-updates
    Cleaning up Everything
    Cleaning up list of fastest mirrors
    Loaded plugins: clearcenter-marketplace, fastestmirror, kabi
    Loading support for CentOS kernel ABI
    ClearCenter Marketplace: fetching repositories...
    Determining fastest mirrors
    * centos-centosplus-unverified: centos.mirror.crucial.com.au
    ..
    ...snipped
    ...
    * clearos-updates: mirror1-singapore.clearos.com
    * clearos-updates-testing: mirror1-singapore.clearos.com
    * contribs: download4.clearsdn.com
    * private-clearcenter-dyndns: download1.clearsdn.com:80
    clearos | 3.8 kB 00:00
    clearos/primary_db | 1.7 MB 00:02
    clearos-addons | 2.9 kB 00:00
    clearos-addons/primary_db | 43 kB 00:00
    clearos-centos | 3.7 kB 00:00
    clearos-centos/primary_db | 4.6 MB 00:03
    clearos-centos-updates | 2.9 kB 00:00
    clearos-centos-updates/primary_db | 5.1 MB 00:01
    clearos-extras | 2.9 kB 00:00
    clearos-extras/primary_db | 115 kB 00:00
    clearos-fast-updates | 2.9 kB 00:00
    clearos-fast-updates/primary_db | 2.5 kB 00:00
    clearos-updates | 2.9 kB 00:00
    clearos-updates/primary_db | 610 kB 00:01
    Installed Packages
    app-raid.noarch 1:1.6.6-1.v6 @clearos-updates
    Available Packages
    app-raid.noarch 1:1.1.7-1.v6 clearos
    app-raid.noarch 1:1.1.7-1.v6 clearos
    app-raid.noarch 1:1.6.6-1.v6 clearos-updates
    [root@danda ~]#

    An install just gave me the faulty version...

    Friday, May 06 2016, 01:13 PM - #Permalink
    @Paul,

    I don't see in your code snippet where you actually ran the upgrade command:

    yum --enablerepo=clearos-updates-testing upgrade app-raid


    B

    Paul
    Friday, May 06 2016, 07:54 AM - #Permalink
    Hi Ben, thanks for looking at this for us. I have tried to update, but nothing got installed.

    [root@fs1 log]# yum --enablerepo=clearos-updates-testing clean all
    Loaded plugins: clearcenter-marketplace, fastestmirror
    Cleaning repos: clearos clearos-addons clearos-centos clearos-centos-updates clearos-extras clearos-fast-updates clearos-updates
    : clearos-updates-testing zabbix zabbix-non-supported
    Cleaning up Everything
    Cleaning up list of fastest mirrors
    [root@fs1 log]# yum --enablerepo=clearos-updates-testing upgrade app-raid
    Loaded plugins: clearcenter-marketplace, fastestmirror
    Setting up Upgrade Process
    ClearCenter Marketplace: fetching repositories...
    Determining fastest mirrors
    * clearos: ftp.nluug.nl
    * clearos-addons: ftp.nluug.nl
    * clearos-centos: download2.clearsdn.com
    * clearos-centos-updates: download2.clearsdn.com
    * clearos-extras: ftp.nluug.nl
    * clearos-fast-updates: download2.clearsdn.com
    * clearos-updates: ftp.nluug.nl
    * clearos-updates-testing: ftp.nluug.nl
    * contribs: download2.clearsdn.com
    * private-clearcenter-backuppc: download4.clearsdn.com:80
    * private-clearcenter-dyndns: download2.clearsdn.com:80
    * private-clearcenter-plex: download4.clearsdn.com:80
    * private-clearcenter-smart-monitor: download4.clearsdn.com:80
    * private-clearcenter-zarafa-community: download4.clearsdn.com:80
    clearos | 3.8 kB 00:00
    clearos/primary_db | 1.7 MB 00:00
    clearos-addons | 2.9 kB 00:00
    clearos-addons/primary_db | 43 kB 00:00
    clearos-centos | 3.7 kB 00:00
    clearos-centos/primary_db | 4.6 MB 00:02
    clearos-centos-updates | 2.9 kB 00:00
    clearos-centos-updates/primary_db | 5.1 MB 00:01
    clearos-extras | 2.9 kB 00:00
    clearos-extras/primary_db | 115 kB 00:00
    clearos-fast-updates | 2.9 kB 00:00
    clearos-fast-updates/primary_db | 2.5 kB 00:00
    clearos-updates | 2.9 kB 00:00
    clearos-updates/primary_db | 610 kB 00:00
    clearos-updates-testing | 2.9 kB 00:00
    clearos-updates-testing/primary_db | 17 kB 00:00
    zabbix | 951 B 00:00
    zabbix/primary | 23 kB 00:00
    zabbix 145/145
    zabbix-non-supported | 951 B 00:00
    zabbix-non-supported/primary | 3.8 kB 00:00
    zabbix-non-supported 15/15
    No Packages marked for Update
    [root@fs1 log]#


    [root@fs1 log]# yum list installed|grep app-raid
    app-raid.noarch 1:1.6.6-1.v6 @clearos-updates
    app-raid-core.noarch 1:1.6.6-1.v6 @clearos-updates
    [root@fs1 log]#

    Thursday, May 05 2016, 03:15 PM - #Permalink
    Hi guys,

    Gave this app some TLC this morning...confirmed and tracked this bug for the RAID0 issue, and this bug for the wonkiness around the settings page and the fact that notifications for "On fail" or "All" were completely broken.

    Sent up the changes to the build system just now as version 2.1.7. Should be available later today using:

    yum --enablerepo=clearos-updates-testing clean all
    yum --enablerepo=clearos-updates-testing upgrade app-raid


    If those on this thread can confirm these two issues are resolved for them, I'll go ahead and mark those two bugs resolved and we can promote this version to the main repos next Tues.

    B.

    Wednesday, May 04 2016, 11:45 PM - #Permalink
    Shaun Moberg wrote
    [quote]
    The web GUI does not appear to completely update or disable the monitoring and notification options on save.
    [unquote]
    I also saw weirdness with this app. As soon as I entered an email address, it automatically set itself to enabled. I did not want that - I wanted to send a test email first to see what the report looked like, then enable it if it was useful. Every time I edited it to disabled, on save it was enabled again. I tried to disable it and remove the address, then save - it will not let you save without an address. So I did the ultimate disable: "yum remove app-raid". I should also mention that before entering the address, I pressed the test button. It behaved as if it had sent an email, with no warning or error that the email address was missing.

    Wednesday, May 04 2016, 02:07 PM - #Permalink
    For reasons unknown to me, this command is failing on your system:

    parted -m /dev/md0 print


    I cannot reproduce on my server...here's my output:


    [~]# cat /proc/mdstat
    Personalities : [raid1]
    md126 : active raid1 sdc1[0] sdb1[1]
    512960 blocks super 1.0 [2/2] [UU]
    bitmap: 0/1 pages [0KB], 65536KB chunk

    md127 : active raid1 sdc2[0] sdb2[1]
    155070464 blocks super 1.2 [2/2] [UU]
    bitmap: 1/2 pages [4KB], 65536KB chunk

    unused devices: <none>
    [~]# parted -m /dev/md126 print
    BYT;
    /dev/md126:525MB:md:512:512:loop:Linux Software RAID Array:;
    1:0.00B:525MB:525MB:ext4::;
    [~]# parted -m /dev/md127 print
    BYT;
    /dev/md127:159GB:md:512:512:loop:Linux Software RAID Array:;
    1:0.00B:159GB:159GB:ext4::;


    I'll get Darryl to have a look at this thread...he may need some more information from you guys.

    B.

    philipz
    Wednesday, May 04 2016, 10:30 AM - #Permalink
    I have more or less the same issue, but with a different source. Some additional info - the machine has 4 RAID arrays, but md2 is RAID-0. Looking into the error, I found that RAID-0 does not support redundancy, so it has no 'degraded' file. I can't find a way to tell the "Software RAID Manager" not to check md2.


    engine: exception: error: /usr/clearos/apps/base/libraries/Shell.php (207): /bin/cat: /sys/block/md2/md/degraded: No such file or directory
    engine: exception: debug backtrace: /usr/clearos/apps/raid/libraries/Raid.php (997): execute
    engine: exception: debug backtrace: /usr/clearos/apps/raid/libraries/Raid.php (747): _get_md_field
    engine: exception: debug backtrace: /usr/clearos/apps/raid/libraries/Raid.php (1079): get_arrays
    engine: exception: debug backtrace: /usr/clearos/apps/raid/libraries/Raid.php (488): _create_report
    engine: exception: debug backtrace: /usr/sbin/raid-notification (90): check_status_change
    engine: exception: error: /usr/clearos/apps/base/libraries/Shell.php (207): /bin/cat: /sys/block/md2/md/degraded: No such file or directory
    engine: exception: debug backtrace: /usr/clearos/apps/raid/libraries/Raid.php (997): execute
    engine: exception: debug backtrace: /usr/clearos/apps/raid/libraries/Raid.php (747): _get_md_field
    engine: exception: debug backtrace: /usr/clearos/apps/raid/controllers/software.php (76): get_arrays
    engine: exception: debug backtrace: /usr/clearos/framework/application/libraries/Page.php (601): index
    engine: exception: debug backtrace: /usr/clearos/framework/application/libraries/Page.php (445): view_controllers
    engine: exception: debug backtrace: /usr/clearos/apps/raid/controllers/raid.php (81): view_forms
    engine: exception: debug backtrace: GUI (0): index
    engine: exception: debug backtrace: /usr/clearos/framework/system/core/CodeIgniter.php (359): call_user_func_array
    engine: exception: debug backtrace: /usr/clearos/framework/htdocs/app/index.php (222): require_once
    engine: exception: error: /usr/clearos/apps/base/libraries/Shell.php (207): /bin/cat: /sys/block/md2/md/degraded: No such file or directory
    engine: exception: debug backtrace: /usr/clearos/apps/raid/libraries/Raid.php (997): execute
    engine: exception: debug backtrace: /usr/clearos/apps/raid/libraries/Raid.php (747): _get_md_field
    engine: exception: debug backtrace: /usr/clearos/apps/raid/controllers/software.php (220): get_arrays
    engine: exception: debug backtrace: GUI (0): get_state


     cat /proc/mdstat
    Personalities : [raid1] [raid6] [raid5] [raid4] [raid0]
    md2 : active raid0 sdb3[1] sdc3[2] sdd3[3] sda3[0]
    16760832 blocks super 1.2 512k chunks

    md1 : active raid1 sdb2[1] sdd2[3] sda2[0] sdc2[2]
    35807232 blocks super 1.2 [4/4] [UUUU]

    md3 : active raid5 sda4[0] sdb4[1] sdd4[4] sdc4[2]
    11599941120 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/4] [UUUU]
    bitmap: 0/29 pages [0KB], 65536KB chunk

    md0 : active raid1 sdb1[1] sdc1[2] sdd1[3] sda1[0]
    204736 blocks [4/4] [UUUU]

    unused devices: <none>


     parted -m /dev/md2 print
    BYT;
    /dev/md2:17.2GB:unknown:512:4096:loop:Unknown;
    1:0.00B:17.2GB:17.2GB:linux-swap(v1)::;
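    The failing call in the backtrace is a plain cat of /sys/block/md2/md/degraded, which RAID-0 arrays never expose. A minimal sketch of the kind of guard that avoids the error — the paths come from the backtrace above, and the device list is simply whatever /sys/block happens to contain:

    ```shell
    # Read the "degraded" attribute only where it exists; RAID-0/linear
    # arrays have no redundancy and therefore no such sysfs file.
    for md in /sys/block/md*/md; do
        [ -d "$md" ] || continue          # no md arrays on this host
        dev=$(basename "$(dirname "$md")")
        if [ -r "$md/degraded" ]; then
            echo "$dev: degraded=$(cat "$md/degraded")"
        else
            echo "$dev: no degraded attribute (non-redundant level, e.g. RAID-0)"
        fi
    done
    ```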
  • Accepted Answer

    Paul
    Offline
    Friday, November 13 2015, 10:02 AM - #Permalink
    Resolved
    0 votes
    Hi, here are my results:
    [root@fs1 ~]# df
    Filesystem 1K-blocks Used Available Use% Mounted on
    /dev/sdd2 938131328 15784156 874686172 2% /
    tmpfs 4013252 4 4013248 1% /dev/shm
    /dev/sdd1 95054 31912 58022 36% /boot
    /dev/mapper/vg_fs1-LogVol00
    3845579784 1219457696 2430771152 34% /home
    [root@fs1 ~]#


     cat /proc/mdstat
    Personalities : [raid6] [raid5] [raid4]
    md0 : active raid5 sda1[0] sdc1[3] sdb1[1]
    3907023872 blocks super 1.1 level 5, 512k chunk, algorithm 2 [3/3] [UUU]
    bitmap: 0/15 pages [0KB], 65536KB chunk

    unused devices: <none>
    [root@fs1 ~]#


    [root@fs1 ~]# parted -m /dev/md0 print
    Error: /dev/md0: unrecognised disk label
    [root@fs1 ~]#


    and for completeness
    [root@fs1 ~]# mdadm --detail /dev/md0
    /dev/md0:
    Version : 1.1
    Creation Time : Sat Oct 13 01:11:40 2012
    Raid Level : raid5
    Array Size : 3907023872 (3726.03 GiB 4000.79 GB)
    Used Dev Size : 1953511936 (1863.01 GiB 2000.40 GB)
    Raid Devices : 3
    Total Devices : 3
    Persistence : Superblock is persistent

    Intent Bitmap : Internal

    Update Time : Fri Nov 13 07:20:38 2015
    State : clean
    Active Devices : 3
    Working Devices : 3
    Failed Devices : 0
    Spare Devices : 0

    Layout : left-symmetric
    Chunk Size : 512K

    Name : fs1:0 (local to host fs1)
    UUID : 6e4c4049:543cdbba:c44ac7fc:64710f98
    Events : 30390

    Number Major Minor RaidDevice State
    0 8 1 0 active sync /dev/sda1
    1 8 17 1 active sync /dev/sdb1
    3 8 33 2 active sync /dev/sdc1
    [root@fs1 ~]#
  • Accepted Answer

    Thursday, November 12 2015, 09:26 PM - #Permalink
    Resolved
    0 votes
    cat /proc/mdstat:

    Personalities : [raid1]
    md0 : active raid1 sda1[0] sdb1[1]
    1048512 blocks super 1.0 [2/2] [UU]
    bitmap: 0/1 pages [0KB], 65536KB chunk

    md1 : active raid1 sda2[0] sdb2[1]
    243994432 blocks super 1.1 [2/2] [UU]
    bitmap: 1/2 pages [4KB], 65536KB chunk

    unused devices: <none>



    parted -m /dev/md0 print
    BYT;
    /dev/md0:1074MB:unknown:512:512:loop:Unknown;
    1:0.00B:1074MB:1074MB:ext4::;


    parted -m /dev/md1 print:
    Error: /dev/md1: unrecognised disk label


    I believe this is a known bug in the parted utility.
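    For what it's worth, parted only reports partition tables, so an md device that carries a filesystem or volume signature directly (with no partition table) can produce that message even when the array is healthy. A hedged sketch for seeing what is actually on each array — the md0/md1 names are taken from the output above and may differ on other systems, and blkid typically needs root:

    ```shell
    # Show the on-device signature for each md array; "unrecognised disk
    # label" from parted usually just means there is no partition table.
    for dev in /dev/md0 /dev/md1; do
        [ -b "$dev" ] || continue         # skip arrays absent on this host
        type=$(blkid -o value -s TYPE "$dev" 2>/dev/null)
        echo "$dev: ${type:-no recognised signature (or blkid needs root)}"
    done
    ```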
  • Accepted Answer

    Thursday, November 12 2015, 06:21 PM - #Permalink
    Resolved
    0 votes
    I never saw this on my 6.x system before I upgraded to 7... I'm currently without a dev environment with RAIDX... Can one of you guys post the results of:
    df
    cat /proc/mdstat


    and

    parted -m /dev/mdX print


    Where X is valid for your system (eg. 0, 1 etc.)

    Thx.

    B
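    In case it helps others gather the same data, here is a small sketch that runs the requested commands for every array listed in /proc/mdstat, so nobody has to guess which X values are valid on their system (parted may need root):

    ```shell
    # Collect df, mdstat, and per-array parted output in one pass.
    df -h
    echo
    cat /proc/mdstat 2>/dev/null
    for md in $(awk '/^md[0-9]* :/ {print $1}' /proc/mdstat 2>/dev/null); do
        echo "== /dev/$md =="
        parted -m "/dev/$md" print || true   # non-zero on 'unrecognised disk label'
    done
    ```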
  • Accepted Answer

    Wednesday, November 11 2015, 07:05 PM - #Permalink
    Resolved
    0 votes
    I have added this issue to the bug tracker - https://tracker.clearos.com/view.php?id=6051
  • Accepted Answer

    Paul
    Offline
    Friday, November 06 2015, 08:47 AM - #Permalink
    Resolved
    0 votes
    I have almost the same error in my system log (md0 rather than md1), so you're not alone. I wouldn't have known without seeing your post and checking my log. If you run "parted -l" to list your disk partitions, you can see the error there too. I ran "mdadm --detail /dev/md0" and it shows that my array is clean and in sync. My storage seems to be working fine, so I will just keep an eye on it unless someone posts here that it is a problem that needs fixing.
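    The two checks mentioned above can be combined into a quick health pass. This is only a sketch using the same parted/mdadm commands; both typically need root, and the /dev/md0 and /dev/md1 names are this thread's arrays, not a general rule:

    ```shell
    # Quick array health pass: surface any parted label complaints, then
    # confirm each array's state with mdadm.
    parted -l 2>&1 | grep -iE 'error|disk label|model|/dev/md' || true
    for md in /dev/md0 /dev/md1; do
        [ -b "$md" ] || continue          # skip arrays absent on this host
        mdadm --detail "$md" 2>/dev/null | grep -E 'State :|Active Devices|Failed Devices' || true
    done
    ```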