
Resolved
0 votes
I was partitioning and formatting a drive. I used parted, and I noticed that when partitioning the drive the type was set to "Microsoft basic" primary. I'm not sure if I can change this with parted!? For now I did it with fdisk using "change a partition's system id", and then option 20, "Linux filesystem".

Command (m for help): p

Disk /dev/sdb: 10000.8 GB, 10000831348736 bytes, 19532873728 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk label type: gpt
Disk identifier: 55FF48A7-0556-44D1-8664-307E6E8F58F5


# Start End Size Type Name
1 2048 19532871679 9.1T Microsoft basic primary
Thursday, August 23 2018, 01:37 PM
Responses (6)
  • Accepted Answer

    Friday, August 31 2018, 05:04 PM - #Permalink
    Resolved
    0 votes
    So, I ran into this last night and what I did was to simply ignore what parted had to say about it. When I attempted to put partitions down, I got warnings about overwriting signatures on the device and had to confirm that yes, I want it to format the thing. It did this about 3 times on the disk.

    Out of curiosity, I went back to parted after this happened and sure enough, it was fixed. It seems this is something that parted reports on but does not configure.
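
    For what it's worth, if you ever do want to set the GPT type name explicitly instead of letting the format sort it out, sgdisk can do that (this assumes the gdisk package is installed; partition number 1 and type code 8300, "Linux filesystem", are just examples):

    sgdisk -t 1:8300 /dev/sdb    # set partition 1 to type 8300 (Linux filesystem)
    sgdisk -p /dev/sdb           # print the table to confirm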
  • Accepted Answer

    Sunday, August 26 2018, 09:24 AM - #Permalink
    Resolved
    0 votes
    When I check with fdisk -l:


    WARNING: fdisk GPT support is currently new, and therefore in an experimental phase. Use at your own discretion.

    Disk /dev/sdc: 10000.8 GB, 10000831348736 bytes, 19532873728 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk label type: gpt
    Disk identifier: BEE613E5-2B58-4673-B5B1-CB6D6D86F994


    # Start End Size Type Name
    1 2048 19532871679 9.1T Microsoft basic primary



    You still see the Microsoft basic type.


    With parted -l you don't:


    Model: ST10000V N0004-1ZD101 (scsi)
    Disk /dev/sdc: 10.0TB
    Sector size (logical/physical): 512B/512B
    Partition Table: gpt
    Disk Flags:

    Number Start End Size File system Name Flags
    1 1049kB 10.0TB 10.0TB xfs primary
  • Accepted Answer

    Sunday, August 26 2018, 09:17 AM - #Permalink
    Resolved
    0 votes
    When using parted:


    (parted) mkpart primary ext2 0% 100%

    (parted) print
    Model: ST10000V N0004-1ZD101 (scsi)
    Disk /dev/sdc: 10.0TB
    Sector size (logical/physical): 512B/512B
    Partition Table: gpt
    Disk Flags:

    Number Start End Size File system Name Flags
    1 1049kB 10.0TB 10.0TB ntfs primary


    You see the file system still shows as ntfs; it is showing the file system that was used before. Once you format the partition to ext2, it shows the right file system.
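
    For example, once the new partition is actually formatted, parted picks up the real filesystem (assuming the new partition is /dev/sdc1):

    mkfs.ext2 /dev/sdc1
    parted /dev/sdc print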
  • Accepted Answer

    Saturday, August 25 2018, 06:44 PM - #Permalink
    Resolved
    0 votes
    Dave, much, much appreciated! :)

    My reason to use parted was indeed that it supports large HDDs and GPT, but I couldn't find any way to change the "Type"; when I used parted it was set to "Microsoft basic". So I think I used it the wrong way. I'm going to read and test what you posted. This Linux stuff is really cool; this tinkering is what I missed when I had a Synology box for a while.
  • Accepted Answer

    Thursday, August 23 2018, 04:24 PM - #Permalink
    Resolved
    0 votes
    Oh...another cool thing about snapshots worth mentioning. On super large disks like this, you can check a volume for errors without taking it offline and keeping it offline for days while you repair it. If you noticed, in the /etc/fstab entry I set the values to '0 0' for dump and pass (respectively). The second value says whether the volume should be checked during startup. Imagine setting this value to 1 (or the dump value), having the system trigger a disk check on boot, and then the disk shows an error...wowzer...now you are stuck repairing the disk and not being able to bring it online for the WHOLE time it is being checked. With a snapshot, you can fsck the snapshot to determine if there is an error, so that you can gracefully bring the data partition down on your own schedule if there is a problem. Leaving it permanently at 0 0 could spell disaster if problems are brewing, but automatically checking on startup right after you patched the kernel, only to find you have a disk error, may just ruin your day...if not your WEEK!!! while it rebuilds.
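
    A rough sketch of that check, using the 'data'/'data0' names from the walkthrough below and assuming about 1 GB of free space in the volume group for the snapshot:

    lvcreate --size 1G --snapshot --name snapcheck /dev/mapper/data-data0
    fsck -n /dev/mapper/data-snapcheck    # read-only check of the snapshot (use xfs_repair -n for XFS)
    lvremove /dev/mapper/data-snapcheck   # drop the snapshot when done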
  • Accepted Answer

    Thursday, August 23 2018, 04:17 PM - #Permalink
    Resolved
    0 votes
    The fdisk command is simply awful for provisioning large disks like the one you are using. Use 'parted' instead and set the disk to use a gpt label.

    In the past we could use the same fdisk command, but fdisk has some problems manipulating large volumes and dealing with GPT partitions. So we will need to familiarize ourselves with the parted command. Parted will work through its own shell or by issuing full commands. Enter the shell with (where /dev/sdN is the disk you want to work with):

    parted -a optimal /dev/sdb


    The '-a optimal' part makes parted align your partitions on good boundaries. Otherwise you can get complaints about partitions starting and stopping in bad places.

    To list your partitions and drive information, run:

    print


    It is important to note that you can only have 4 primary partitions with an msdos style label. If you need more than this, make your 4th partition an extended partition and create logical partitions inside it (a short sketch follows the link below). If your disk is blank, you need to make a label. The ‘msdos’ label is used for older disks. For newer disks, use ‘gpt’. Here is an article about the differences:

    https://www.maketecheasier.com/differences-between-mbr-and-gpt/
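
    For reference, here is a minimal sketch of the extended/logical layout on an msdos label (the device and percentages are just examples; with gpt you don't need any of this):

    parted -a optimal /dev/sdX
    (parted) mklabel msdos
    (parted) mkpart primary ext4 0% 25%
    (parted) mkpart primary ext4 25% 50%
    (parted) mkpart primary ext4 50% 75%
    (parted) mkpart extended 75% 100%
    (parted) mkpart logical ext4 75% 85%
    (parted) mkpart logical ext4 85% 100%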

    If your disk already has a label, skip this step or in your case, change the label to gpt:

    mklabel gpt


    NOTE: At any point you can run the ‘print’ command to view the current status. Next you will want to create a partition (or partitions) in unused space on the drive. If you are using the whole drive for a single RAID partition, you can simply initialize the whole drive. Even though this drive is all by itself, you can actually make it into a RAID disk with one member. That way, if you get a second disk, you can just join it to the RAID later and you will have RAID 1. There is even a migration path from here to RAID 5 if you get a 3rd disk...but you have to set it to RAID to begin with. We will use ext2 for the type even though we will make this RAID (we have to set it to something). For example:

    mkpart primary ext2 0% 100%
    set 1 raid on
    quit
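
    As mentioned above, parted can also be driven non-interactively by putting the commands on one line; a sketch of the same steps (adjust the device to suit):

    parted -s -a optimal /dev/sdb mklabel gpt mkpart primary ext2 0% 100% set 1 raid on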


    If you did the above part, skip this next paragraph

    If you are making a partition of a specific size, you will need to set the start and stop points. For example, to create two partitions starting after existing partitions at the 500 GiB boundary: one from 500 GiB to 750 GiB and a second from 750 GiB to 1000 GiB:
    mkpart primary ext2 500GiB 750GiB
    mkpart primary ext2 750GiB 1000GiB
    set 3 raid on
    set 4 raid on
    quit


    If you didn't set the RAID flag because you don't want RAID, skip the RAID bits here:

    For more information about creating RAID devices from command line, visit:
    https://www.youtube.com/watch?v=JgJkfd8O-j8

    Configuring RAID from Command Line: Various RAID types are available. Some will work directly with the multi-disk array while others use a combination of multi-disk and LVM partitions. In the previous section we made two partitions of equal size on partitions 3 and 4 of the same disk. Naturally, you would not create redundant arrays across the same, single disk; this is to demonstrate the function only, so in a real situation, simply use two disks instead. To create a mirror on these partitions, run:
    mdadm --create /dev/md0 --level=mirror --raid-devices=2 /dev/sdb3 /dev/sdb4


    If you want to make a mirror with just one disk for now (a degraded mirror you can complete later), do something like this:
    mdadm --create /dev/md0 --level=mirror --raid-devices=2 /dev/sdb1 missing


    There are no real performance issues running RAID 1 in a degraded state...so why not?

    Viewing RAID Status: Managing the array requires being able to view its status. We can view the status of our array with:

    cat /proc/mdstat


    If the status shows that a rebuild or sync is in progress, you can use the ‘watch’ command to have the command re-issued every 2 seconds:
    watch cat /proc/mdstat




    The [UU] output is significant. It represents that there are 2 devices in the array and they are both up. If there were 5 devices and all were up, it would say [UUUUU]. If there were 5 devices and one was down, it would say [UUUU_].
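
    For reference, a healthy two-member mirror looks roughly like this in /proc/mdstat (the device names and block count here are illustrative, not real output from this system):

    Personalities : [raid1]
    md0 : active raid1 sdb4[1] sdb3[0]
          262010880 blocks super 1.2 [2/2] [UU]

    unused devices: <none>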

    For even more detail, run:
    mdadm --detail /dev/md0

    Removing RAID Member: To remove a RAID member you will need to know the RAID name (/dev/md0 in our example) and the disk member (/dev/sdb2 in our example). You cannot remove a device that is in use. If necessary, dismount any volumes and stop the array. If the array is part of your running OS, reboot into rescue mode in order to rebuild the array.
    [code type="markup"]mdadm --manage /dev/md0 --fail /dev/sdb2
    cat /proc/mdstat
    mdadm --manage /dev/md0 -r /dev/sdb2
    cat /proc/mdstat


    Adding RAID Member(s)
    To add a RAID member you will need a partition of the same size or bigger. You will need to know the array and the device you will add to it.
    mdadm --manage /dev/md0 --add /dev/sdb2
    cat /proc/mdstat


    Removing All RAID Members and Devices
    When you remove a RAID, you also have to purge header information from the partition.
    mdadm --stop /dev/md0
    mdadm --remove /dev/md0
    mdadm --zero-superblock /dev/sdb1
    mdadm --zero-superblock /dev/sdb2
    cat /proc/mdstat


    Starting a Stopped Array
    Arrays that have been stopped keep their metadata on the member devices and can be reassembled.
    mdadm --assemble --scan
    cat /proc/mdstat
    Backup RAID Parameters
    You can back up your RAID configuration to disk.
    mdadm --detail --scan > /etc/mdadm.conf


    Configuring LVM
    LVM devices are capable of wonderful things such as resizing and even snapshotting. Snapshots are great for backups! You can snapshot a volume and back that up instead of your main partition. This lets you back up a point in time, since a volume in use is in a state of constant change. For this exercise, we will create an LVM volume on the RAID array we created and also create a snapshot volume that can be used to back up our disk. Snapshot volumes only need to be as big as the changes that will occur during the backup. For example, if 1 GB or less of data is expected to change during the backup cycle, your snapshot volume only needs to be 1 GB in size. If you run out of space on the snapshot, the snapshot volume is dropped.

    Start by initializing the RAID device as an LVM physical volume. Later, when we create the logical volume, we will use all of the space except what we reserve for snapshots (about 1 GB here). On larger disks, you can reserve even more. Remember, it is trivial to grow a volume into empty space, so there is little to lose by setting the volume size to 95% of capacity.
    pvcreate /dev/md0


    Next, you can view your creation using:
    pvs


    Next, create a volume group for your data. For this, we will use the ‘data’ volume group:
    vgcreate data /dev/md0


    You can view your volume group with:
    vgs


    From here you can see how much space is available. You can create the logical volume now; reserving some space for your snapshots, decide how big it will be. For this example, I’m making it 8GB in size and calling it ‘data0’:
    lvcreate -L 8G -n data0 data


    You can view the logical volume with:
    lvs


    The ‘vgs’ command will also show you the consumption of the logical volume on the volume group.

    Managing LVM
    Adding to your LVM: You can manage LVM in many ways, like adding more space, shrinking volumes and even making snapshots. To add more space, run:
    lvextend -L +500M /dev/mapper/data-data0
    lvs
    vgs
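
    Note that lvextend only grows the logical volume; the filesystem on it still has to be grown before the new space is usable. A sketch for ext4 (XFS would use xfs_growfs instead), or use -r to do both in one step:

    resize2fs /dev/mapper/data-data0              # grow the ext4 filesystem to fill the volume
    lvextend -r -L +500M /dev/mapper/data-data0   # alternative: extend and resize the filesystem together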


    Looking at Attributes with ‘lvs’: The ‘Attr’ section has flags for various things. Here is their meaning:
    Volume type: (m)irrored, (M)irrored without initial sync, (o)rigin, (O)rigin with merging snapshot, (r)aid, (R)aid without initial sync, (s)napshot, merging (S)napshot, (p)vmove, (v)irtual, mirror or raid (i)mage, mirror or raid (I)mage out-of-sync, mirror (l)og device, under (c)onversion, thin (V)olume, (t)hin pool, (T)hin pool data, raid or thin pool m(e)tadata
    Permissions: (w)riteable, (r)ead-only, (R)ead-only activation of non-read-only volume
    Allocation policy: (a)nywhere, (c)ontiguous, (i)nherited, c(l)ing, (n)ormal This is capitalised if the volume is currently locked against allocation changes, for example during pvmove(8).
    fixed (m)inor
    State: (a)ctive, (s)uspended, (I)nvalid snapshot, invalid (S)uspended snapshot, snapshot (m)erge failed, suspended snapshot (M)erge failed, mapped (d)evice present without tables, mapped device present with (i)nactive table
    device (o)pen
    Target type: (m)irror, (r)aid, (s)napshot, (t)hin, (u)nknown, (v)irtual. This groups logical volumes related to the same kernel target together. So, for example, mirror images, mirror logs as well as mirrors themselves appear as (m) if they use the original device-mapper mirror kernel driver; whereas the raid equivalents using the md raid kernel driver all appear as (r). Snapshots using the original device-mapper driver appear as (s); whereas snapshots of thin volumes using the new thin provisioning driver appear as (t).
    Newly-allocated data blocks are overwritten with blocks of (z)eroes before use.
    (p)artial: One or more of the Physical Volumes this Logical Volume uses is missing from the system.

    Some typical outputs might be ‘-wi-a-----’ or ‘-wi-ao----’, which mean:
    -wi-a-----
    writeable, inherited, active
    -wi-ao----
    writeable, inherited, active, open/mounted


    Activating and Deactivating an LVM: To toggle a logical volume between active and inactive you can use the vgchange command. Unmount volumes before deactivating them (an ‘o’ in the sixth attribute column means open/mounted). You can look at the status with the ‘lvs’ command; an ‘a’ in the fifth column means the volume is active and can be deactivated.
    [code type="markup"]vgchange -an data


    To activate a volume:
    vgchange -ay data


    Reduce a volume size: You can reduce the size of a logical volume. NOTE: YOU MUST SHRINK THE FILESYSTEM ON THE VOLUME FIRST. If the filesystem has already been reduced in size, or there is no filesystem on the volume yet, you can reduce the volume at will; run:
    lvreduce --size -500M /dev/mapper/data-data0
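
    A sketch of the safe order for an ext4 volume mounted at /store/data0 (the mount point is from the example further down); the -r flag tells lvreduce to shrink the filesystem before shrinking the volume. Note that XFS filesystems cannot be shrunk at all:

    umount /store/data0
    e2fsck -f /dev/mapper/data-data0                  # lvreduce -r wants a clean filesystem
    lvreduce -r --size -500M /dev/mapper/data-data0   # shrinks the ext4 filesystem, then the volume
    mount /store/data0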


    Creating a snapshot of a volume: You can snapshot a volume provided that you have free space in the volume group containing the target drive. Review this with ‘vgs’. Create a snapshot in the volume group of your target drive and name it after your volume plus a timestamp. For the date stamp, you can use the epoch time. Use this:
    datestamp=$(date +%s) && lvcreate --size 1G  --snapshot --name snap-data0-${datestamp} /dev/mapper/data-data0


    When the volume is created, lvcreate will tell you the name of the volume it created. At this point, the snapshot volume can be mounted. We suggest mounting it read-only, which is appropriate for backup. NOTE: ignore the ‘cow’ (Copy on Write) volume; it holds the data that preserves the original, so even if you mount the snapshot read/write, the original is preserved.
    mkdir -p /store/snap-data0
    mount -o ro,nouuid /dev/mapper/data-snap--data0--${datestamp} /store/snap-data0
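
    From there a backup can simply read from the snapshot mount. As a sketch, with a hypothetical backup location at /mnt/backup:

    tar -czf /mnt/backup/data0-${datestamp}.tar.gz -C /store/snap-data0 .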


    To remove the snapshot after your backup, run:
    lvremove /dev/mapper/data-snap--data0--${datestamp}


    To remove a regular logical volume, unmount the disk and run:
    lvremove /dev/mapper/data-data0


    Creating Filesystems with EXT4, XFS, BTRFS
    With a volume in place you can put a filesystem on it.
    EXT4:
    mkfs.ext4 /dev/mapper/data-data0

    XFS
    mkfs.xfs /dev/mapper/data-data0

    BTRFS
    mkfs.btrfs /dev/mapper/data-data0


    Reclaiming reserve: ext3/4 filesystems reserve a portion of the disk for the root user by default. If you wish to reclaim this space, run:
    tune2fs -m 0 /dev/mapper/data-data0

    Managing Partitions
    In ClearOS and other forms of Linux, the filesystem is structured as a large tree. You can look at this tree with the ‘findmnt’ command. Try it out:
    findmnt


    When you add additional volumes, either as local disks or remotely mounted disks (NFS, CIFS, Gluster), you will snap these into place. With ClearOS, we recommend that you go further. Rather than snapping these volumes in all over the place, centralize where you place additional disks under the /store folder. Later, we will bind mount the volume to a different directory. Make sure you have a store folder:
    mkdir -p /store


    For each device you mount, local or remote, create a volume name. For example, ‘data0’, ‘data1’ and so forth are great for locally mounted disks. For remote systems, ‘buffalo1’, ‘wdcloud’, ‘iscsi1’, and other descriptive names are appropriate. Start by making a mount point:
    mkdir -p /store/data0


    You can test mounting your device. For example:
    mount /dev/mapper/data-data0 /store/data0


    If this mounts, add it to your /etc/fstab. It is better to use the UUID of a disk instead of its name or device type, although for LVMs, names are probably ok. Add this line:
    /dev/mapper/data-data0	/store/data0	ext4	defaults		0 0
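
    If you do want the UUID form mentioned above, it looks like this (the UUID below is a placeholder; read the real one with blkid):

    blkid /dev/mapper/data-data0
    UUID=0f1f7c66-1111-2222-3333-444455556666	/store/data0	ext4	defaults		0 0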


    Now test mounting and unmounting:
    umount /store/data0
    mount /store/data0


    If you have trouble, check your syntax. Within this structure, make the directories that you wish to snap in place onto other objects or flexshares in the file system. A good layout that is amenable to live systems and backups is to make a structure like this on the mounted file system (where server1 is the hostname for your server):
    mkdir -p /store/data0/live/server1/system-mysql
    mkdir -p /store/data0/backup
    mkdir -p /store/snap-data0
    mkdir -p /store/data0/log
    mkdir -p /store/data0/sbin


    In the above example we create objects on the volume itself. The use of bind mounts brings it all home: you can snap portions of this new volume into a variety of places in ClearOS.
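
    As a sketch of the bind mount step (the /var/lib/mysql target here is only an example of where you might snap the system-mysql directory in):

    mount --bind /store/data0/live/server1/system-mysql /var/lib/mysql

    and the matching /etc/fstab entry:

    /store/data0/live/server1/system-mysql	/var/lib/mysql	none	bind	0 0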

    For more information on this process visit:
    https://www.clearos.com/resources/documentation/clearos/content:en_us:kb_o_storage_manipulation_using_bindmounts