For some installations, you may want to define a custom partition scheme instead of using the default. Typically, custom partitioning is required for:
Creating a separate /home or /data partition
If you decide to use the default partitioning method instead of a custom RAID layout, the installer will create a small partition for '/boot', create a swap partition sized to match the RAM allocated to the system, and assign the remaining space to the root filesystem '/'.
If you do not wish to use the default partitioning scheme on your system, select Create Custom Layout in the installer's partitioning screen. The tool lets you create software RAID devices, logical volumes, swap space, and regular partitions, and supports the ext2, ext3, ext4, swap, LVM, RAID, and vfat partition types. You should be familiar with disk partitioning concepts and Linux requirements before using this option.
When you launch the partition tool, you may see a message indicating that the partition table is unreadable. This is normal for blank disk drives or for disks with non-standard partition tables. Choose to create a new partition table.
There are different RAID types to suit different deployment needs. The following is a brief description of RAID types used in ClearOS deployments.
RAID 0 - Striping
RAID 0 is typically used when speed is the only concern. This form of RAID is also called striping. All the data in this type of array is spread over all the disks, and the server can read and write data quickly because it accesses all disks in the array simultaneously. RAID 0 is often used for high-performance application servers and database servers where the data does not need protection or is preserved in some other manner.
Fast Read/Write access
Can make very large volumes
Failure rate higher than single disk and failure rate increases with each additional drive
RAID 1 - Mirroring
RAID 1 is often used to protect the drives on which the operating system runs, or as an entry-level solution for basic data protection. It is also called mirroring, or duplexing if the drives are on separate controllers. All data on the drive is mirrored from one partition to another: reads occur from one drive only, while writes are performed to both. RAID 1 is a well-rounded solution if basic redundancy is your goal. Here is a step-by-step guide to implementing software RAID 1 on regular IDE/SATA/SAS hard disks.
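The steps above can be sketched with mdadm. The device names /dev/sdb1 and /dev/sdc1 and the array name /dev/md0 are assumptions for illustration; substitute the partitions you actually created (as type "Linux raid autodetect") during installation or afterwards.

```shell
# Assumed names: members /dev/sdb1 and /dev/sdc1, array /dev/md0.
# Both partitions should be of equal size and type "Linux raid
# autodetect" (0xfd). These commands must be run as root.

# Create a two-member RAID 1 (mirror) array
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1

# Build a filesystem on the new array (ext4 here, as an example)
mkfs.ext4 /dev/md0

# Mount it and confirm the array is active or resyncing
mount /dev/md0 /mnt
cat /proc/mdstat
```

The initial resync runs in the background; the volume is usable immediately, but full redundancy is only in place once the resync completes.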
RAID 5 - Striping with Parity
RAID 5 is typically used on volumes where redundancy is required and maximum capacity is needed. This form of RAID is also called striping with parity: data is spread across all the disks, and parity information is maintained and distributed across all disks in such a way that a single drive failure in the array can be tolerated. RAID 5 is typical for many storage servers.
RAID 6 - Striping with Dual Parity
RAID 6 is very similar to RAID 5 except that two drives' worth of capacity are allocated to parity instead of one. RAID 6 is more effective than RAID 5 with a hot spare because both parity sets are maintained continuously rather than rebuilt onto the spare at the point of failure. This form of RAID is typically used on volumes where extra redundancy is required along with good capacity. It is similar to HP's ADG RAID and works by spreading the data across all the disks; two parity checksums are maintained and distributed across all disks in such a way that the array can tolerate two disk failures. RAID 6 is typical for many storage servers.
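As a sketch of creating a parity-based array with mdadm, the following assumes four equal-sized member partitions /dev/sdb1 through /dev/sde1 and the array name /dev/md1; these names are illustrative, not from the original text.

```shell
# Assumed names: four members /dev/sdb1 ... /dev/sde1, array /dev/md1.
# Run as root. For RAID 5, change --level=6 to --level=5.
mdadm --create /dev/md1 --level=6 --raid-devices=4 \
    /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1

# Usable capacity: RAID 6 yields (N-2) disks of space, RAID 5 yields
# (N-1); with four disks that is 2 disks' worth vs 3 disks' worth.
mdadm --detail /dev/md1
```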
You can validate your RAID system by checking the status of the array with the 'cat /proc/mdstat' command. You can also validate RAID 1 and RAID 5 systems by removing a RAID member.
Gathering RAID Statistics
The /proc filesystem shows up-to-date information about your RAID members. Issue the following command to see the status of your RAID: 'cat /proc/mdstat'
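A healthy array shows all members up, e.g. "[UU]" for a two-member mirror. The snippet below is a hedged illustration: it checks a saved sample of typical mdstat output rather than a live array, and the device names and block counts in the sample are assumptions.

```shell
# Illustrative /proc/mdstat output for a healthy two-member RAID 1;
# device names and block counts are assumptions, not from a live system.
cat <<'EOF' > /tmp/mdstat.sample
Personalities : [raid1]
md0 : active raid1 sdb1[1] sda1[0]
      1048512 blocks [2/2] [UU]

unused devices: <none>
EOF

# "[UU]" means both members are up; "[U_]" would indicate a failed member.
if grep -q '\[UU\]' /tmp/mdstat.sample; then
    echo "RAID healthy"
else
    echo "RAID degraded"
fi
```

On a real system you would run the same grep against /proc/mdstat itself.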
Another way to validate redundancy is to manually create a 'failed' condition by physically disconnecting a RAID member. This is recommended for new systems as a way to validate this feature before production data is committed to the volume.
Power down the machine
Unplug the data connector from the drive (just unplugging the power is going to make the BIOS unhappy and the system will not be bootable)
Power up the machine
Check the data and the volume status
Power down the machine and re-attach the drive
Use 'watch cat /proc/mdstat' to monitor the volume while it is in recovery mode
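As an alternative to physically disconnecting a drive, mdadm can mark a member as failed in software and then re-add it. This is a sketch assuming the array is /dev/md0 and the member is /dev/sdb1; adjust the names for your system.

```shell
# Assumed names: array /dev/md0, member /dev/sdb1. Run as root.

# Mark the member as failed, then remove it from the array
mdadm --manage /dev/md0 --fail /dev/sdb1
mdadm --manage /dev/md0 --remove /dev/sdb1

# Verify the array is degraded but the data is still readable
cat /proc/mdstat

# Re-add the member and watch the rebuild progress
mdadm --manage /dev/md0 --add /dev/sdb1
watch cat /proc/mdstat
```

Unlike the physical test, this does not exercise the BIOS or boot path, so it complements rather than replaces the disconnect procedure above.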