Storage Manipulation Using Bindmounts

In both ClearOS and ClearBOX you may need to manipulate mount points in order to use your storage properly and efficiently, the way you want. The following is intended both as a guide for changing those mount points and data structures and as a framework for a future app based on these considerations.

This document relies heavily on the architecture proposed and used in practice for ClearBOX with ClearOS version 5.

The Problem and the Solution

The problem that exists in Linux is that there are a variety of places that data can live. This makes storage planning difficult because we are typically left with two options that are both less than ideal. The first option is to just make one big partition and be done with it. This is what was done by default with a standard ClearOS 5.x install (not this way on ClearBOX). The downside of this solution is that either the users or the system can fill up the entire disk on which the operating system ALSO resides. This can cause huge problems. In ClearOS 5.x this symptom is most easily recognized by Webconfig asking for authentication over and over with each click.

The other method is to partition the drive into various partitions and make a variety of mount points. This means that you can place things that grow on partitions other than the root system partition. The downside to this method is that you cannot predict exactly how much space will be needed on each partition (and who can?), so you will end up with wasted space.

Fortunately, a middle ground exists: you can place all your growing data and user data on a separate partition and use bind mounts to divide that single block of storage among a variety of locations.
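For example, a bind mount makes an existing directory appear at a second path. A minimal sketch, assuming the source directory already exists and holds your home directories:

mount --bind /store/data0/live/server1/home /home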

So here is what we recommend. You will want at least 3 physical partitions on your system:

  • /boot
  • LVM for system data called 'main'
  • LVM for user data called 'data'

It is a good idea to keep /boot on its own. This way it is a plain old partition which can be read easily by GRUB or other systems without needing to dissect the LVM layout. With version 6 we recommend at least 500 Megabytes for your /boot partition.

For the second partition, we will set it up with LVM at a size of 51.2 Gigabytes. We will end up putting system data here.

Last we will throw everything else into a big LVM partition. LVM gives us incredible flexibility which is even further leveraged by our use of mount points later in this guide.

For our install we can just set the size to 20.48 Gigabytes. Why? LVM is super easy to grow, and we would rather grow the volume later than allocate all the space up front; we recommend that you reserve some space, just in case.

Here is how we will further divide the two LVM partitions labeled main and data (example commands to create this layout follow the list):

main
  • swap
    • size: 2048 meg
  • /
    • size: 5120 meg
    • name: root
  • /var
    • size: 20480 meg
    • name: var
  • /var/log
    • size: 8192 meg
    • name: log
data
  • /store/data0
    • size: 10240 meg
    • name: data0
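
A minimal sketch of the commands that would carve out this layout, assuming the volume groups 'main' and 'data' already exist:

lvcreate -L 2048M -n swap main
lvcreate -L 5120M -n root main
lvcreate -L 20480M -n var main
lvcreate -L 8192M -n log main
lvcreate -L 10240M -n data0 data

Each volume then needs a filesystem (mkswap for swap, mkfs.ext4 for the rest) before it can be used.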

Explanations

Back in the day it was recommended that you have swap double the size of your RAM. However, with larger memory pools and other considerations it is unlikely that your system will ever use more than 2 Gigabytes of swap.

ClearOS doesn't need much space; 5 gig is more than enough, as long as you set up the other structures. The only thing that should need to change here is the addition of apps, which is typically pretty small even for some of the most robust and complicated software that you can run on ClearOS. If you should need more later, never fear, we've left some room to grow. This is LVM after all!

The /var partition can use just a little space or vast amounts of space; much of our bind mount strategy will focus here. Outside of your various services, however, this will change more than the '/' (root) partition, so we give it more space. 20 gig should do it.

The log files can grow immensely if something is wrong. We want to keep this somewhat small on purpose. Why? Because runaway logs can crash the system. Set it to 8 gig; we can grow it a little later if you really need to. But if you are exceeding 8 gig then it is likely that you need to address what is going wrong or collect your data in a better way.
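For example, growing the log volume online by a couple of gigabytes is a two-command job (a minimal sketch, assuming ext4 and the volume names used in this guide, with free extents available in 'main'):

lvextend -L +2G /dev/mapper/main-logs
resize2fs /dev/mapper/main-logs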

The /store/data0 LVM partition is where the real magic happens. This is where all the real user data will live and where we will place our data structures. Moreover, and this is really cool, we will use this same paradigm for any additional disks, SAN storage (iSCSI, et al.), connected NAS storage, USB devices and other such storage. We will explain this later.

LVM

LVM stands for Logical Volume Manager. It allows for partitioning with great flexibility. Some of the things that LVM can do are:

  • Resize volume groups online by absorbing new physical volumes (PV) or ejecting existing ones.
  • Resize logical volumes (LV) online by concatenating extents onto them or truncating extents from them.
  • Create read-only snapshots of logical volumes (LVM1).
  • Create read-write snapshots of logical volumes (LVM2); see the example after this list.
  • Stripe whole or parts of logical volumes across multiple PVs, in a fashion similar to RAID 0.
  • Mirror whole or parts of logical volumes, in a fashion similar to RAID 1.
  • Move online logical volumes between PVs.
  • Split or merge volume groups in situ (as long as no logical volumes span the split). This can be useful when migrating whole logical volumes to or from offline storage.
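
For example, the read-write snapshot capability could be exercised like this (a minimal sketch; the snapshot name and the 1 gigabyte change-tracking size are assumptions):

lvcreate -s -L 1G -n data0-snap /dev/data/data0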

LVM will NOT:

  • Provide parity-based redundancy across LVs, as with RAID levels 3 through 6. This functionality is instead provided by the Linux multiple disk subsystem, which can be used as LVM physical volumes.
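
To get parity-based redundancy underneath LVM you would therefore build an MD array first and then hand it to LVM as a physical volume. A minimal sketch, assuming three spare disks with hypothetical device names:

mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdc /dev/sdd /dev/sde
pvcreate /dev/md0
vgextend data /dev/md0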

LVM Utilities

If you've configured these partitions when you installed the system then you will be able to see them and manipulate them. Here are some commands that will be useful:

[root@clearos ~]# pvs
  PV         VG   Fmt  Attr PSize   PFree
  /dev/sdb2  main lvm2 a--   24.41g 2.53g
  /dev/sdb3  data lvm2 a--  207.98g 6.09g
[root@clearos ~]# vgs
  VG   #PV #LV #SN Attr   VSize   VFree
  data   1   1   0 wz--n- 207.98g 6.09g
  main   1   4   0 wz--n-  24.41g 2.53g
[root@clearos ~]# lvs
  LV    VG   Attr     LSize   Pool Origin Data%  Move Log Copy%  Convert
  data0 data -wi-ao-- 201.89g                                           
  logs  main -wi-ao--   4.88g                                           
  root  main -wi-ao--  10.00g                                           
  swap  main -wi-ao--   2.00g                                           
  var   main -wi-ao--   5.00g   

Mountpoints in fstab

After you have installed, your system's /etc/fstab might look something like this:

/dev/mapper/main-root   /                       ext4    defaults        1 1
UUID=5abcde29-abc9-abcd-abcd-1abcd19abcdf /boot                   ext4    defaults        1 2
/dev/mapper/data-data0  /store/data0            ext4    defaults        1 2
/dev/mapper/main-var    /var                    ext4    defaults        1 2
/dev/mapper/main-logs   /var/log                ext4    defaults        1 2
/dev/mapper/main-swap   swap                    swap    defaults        0 0
tmpfs                   /dev/shm                tmpfs   defaults        0 0
devpts                  /dev/pts                devpts  gid=5,mode=620  0 0
sysfs                   /sys                    sysfs   defaults        0 0
proc                    /proc                   proc    defaults        0 0

Adding volumes to ClearOS

To get started with any volume attached to the system we will stick to a couple of standards. First, the volume immediately adjacent to the system partition will always be called data0. Other than this name, all other names can be selected at will.

You should name the logical volume consistently with the directory mount point you create. For example:

lvcreate -l 1280 data -n data0

This command creates the 'data0' logical volume in the volume group 'data'. We will keep this standard of naming the volume the same as its mount point for all attached volumes, whether they are NAS devices, iSCSI targets, USB drives with non-LVM partitions, or whatever the case may be. It is up to the administrator to make sure that attached devices do not overlap in namespace.
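
For example, a second volume added later would follow the same pattern (hypothetical names, assuming free extents remain in the 'data' volume group):

lvcreate -l 1280 data -n data1
mkdir -p /store/data1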

Preparing a Volume
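
If the logical volume was just created, it still needs a filesystem before it can be mounted. A minimal sketch using ext4, to match the fstab entries in this guide:

mkfs.ext4 /dev/mapper/data-data0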

Once a drive is prepared we can add the entry to the /etc/fstab and attempt to mount it. The following is an example of data0's mount point entry in /etc/fstab.

/dev/mapper/data-data0  /store/data0            ext4    defaults        1 2

To mount this device run the following:

mount /store/data0

An inspection of this device will show (on a new volume) only the lost+found directory on this drive:

[root@cbox6 ~]# ls -la /store/data0/
total 28
drwxr-xr-x. 4 root root  4096 Jun  8 18:14 .
drwxr-xr-x. 5 root root  4096 Sep 20 13:24 ..
drwx------. 2 root root 16384 Jun  8 18:09 lost+found

Each drive, regardless of its mount point, should have the same basic structure so that future ClearOS servers can utilize the data properly. Perform the following to create that structure:

mkdir /store/data0/live/
mkdir /store/data0/backup/
mkdir /store/data0/log/
mkdir /store/data0/sbin/

The name for all localhost data should be 'server1'. This convention will allow for exported volumes to be properly processed in the Central User Data paradigm. To designate a volume space as NON-exportable, create the following:

mkdir /store/data0/live/server1
mkdir /store/data0/backup/server1

Bind Mounts

Typical bind mount suggestions for /etc/fstab are:

/store/data0/live/server1/home                  /home                   none bind,rw 0 0
/store/data0/live/server1/root-support          /root/support           none bind,rw 0 0
/store/data0/live/server1/shares                /var/flexshare/shares   none bind,rw 0 0
/store/data0/live/server1/cyrus-imap            /var/spool/imap         none bind,rw 0 0
/store/data0/live/server1/kopano                /var/lib/kopano         none bind,rw 0 0
/store/data0/live/server1/zarafa                /var/lib/zarafa         none bind,rw 0 0
/store/data0/live/server1/system-mysql          /var/lib/system-mysql   none bind,rw 0 0
/store/data0/live/server1/mysql                 /var/lib/mysql          none bind,rw 0 0
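
Each source directory must exist before its bind mount will work; once the entry is in /etc/fstab, the target path can be mounted directly. A minimal sketch for the /home entry above:

mkdir -p /store/data0/live/server1/home
mount /home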

Example: moving squid cache to bindmount

  • Identify the bindmount location and ensure that the new mount location exists. For example:
mount |grep data
/dev/mapper/data-data0 on /store/data0 type ext4 (rw)
/store/data0/live/server1/home on /home type none (rw,bind)
/store/data0/live/server1/root-support on /root/support type none (rw,bind)

-OR-

[root@system ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/main-root
                      9.9G  1.5G  8.0G  16% /
tmpfs                 3.9G     0  3.9G   0% /dev/shm
/dev/md1              117M   47M   64M  43% /boot
/dev/mapper/data-data0
                      644G  7.2G  604G   2% /store/data0
/dev/mapper/main-logs
                      4.9G  207M  4.4G   5% /var/log
  • Make the bindmount directory
mkdir /store/data0/live/server1/squid-cache
  • Next, make sure the properties of the bindmount directory match those of the source.

Gather info about source

ls -la /var/spool/squid/|head -n2
total 3092
drwxr-x---   18 squid squid    4096 Dec  6 10:06 .

Run commands to match permissions

chown --reference /var/spool/squid /store/data0/live/server1/squid-cache
chmod --reference /var/spool/squid /store/data0/live/server1/squid-cache

Validate results

ls -la /store/data0/live/server1/squid-cache/ | head -n2
total 8
drwxr-x---  2 squid squid 4096 Dec  6 23:14 .
  • Next, stop any services that currently use the source directory
service squid stop
service dansguardian-av stop


If you are moving flexshares and you are using the Web server, you will need to stop it as well as Windows Networking:

service httpd stop
service smb stop

  • Move the data to the new location.
yum -y install rsync
rsync -av --delete /var/spool/squid/* /store/data0/live/server1/squid-cache/.
  • For good measure, run the sync again
rsync -av --delete /var/spool/squid/* /store/data0/live/server1/squid-cache/.
  • Validate that your information is at the new location:
ls /store/data0/live/server1/squid-cache/
00  01  02  03  04  05  06  07  08  09  0A  0B  0C  0D  0E  0F  swap.state  swap.state.clean
  • Remove the old data to clean up space
rm -rf /var/spool/squid/*
  • Create the bindmount in /etc/fstab by adding the following line at the end of that file (you can use vi or nano to edit the file):
/store/data0/live/server1/squid-cache           /var/spool/squid        none bind,rw 0 0
  • Mount the new location
mount /var/spool/squid
  • Validate that the mount point is there:
mount |grep '/var/spool/squid'
/store/data0/live/server1/squid-cache on /var/spool/squid type none (rw,bind)
  • Check that you can see the information at both the new and old locations
ls /store/data0/live/server1/squid-cache/
00  01  02  03  04  05  06  07  08  09  0A  0B  0C  0D  0E  0F  swap.state  swap.state.clean
ls /var/spool/squid
00  01  02  03  04  05  06  07  08  09  0A  0B  0C  0D  0E  0F  swap.state  swap.state.clean


To double-check, you can add a file to one location and check that it appears in the other:

touch /store/data0/live/server1/squid-cache/test
ls /var/spool/squid
00  01  02  03  04  05  06  07  08  09  0A  0B  0C  0D  0E  0F  swap.state  swap.state.clean test
rm -f /var/spool/squid/test
ls /store/data0/live/server1/squid-cache
00  01  02  03  04  05  06  07  08  09  0A  0B  0C  0D  0E  0F  swap.state  swap.state.clean

  • Start the services that you stopped.
service squid start
service dansguardian-av start