ZFS Filesystem backup
Hostname | xxxx |
---|---|
OS | CentOS 6.2 (64-bit) |
10GbE | Yes |
MTU | 9000 |
IP Address | xx.xx.xx.xx |
VLAN | backup & archive |
Location | xxxxx |
Storage Layout
Device | Size | Type | Compression | Comments |
---|---|---|---|---|
Raid6_sys | 120GB | Raid6 | No | Hardware Raid on Adaptec Raid Card - 1 hot spare |
Raid_data | 2.5TB | Raid6 | No | Hardware Raid on Adaptec Raid Card - 1 hot spare |
SATA | 1TB | EXT4 | No | Clone of System |
RaidZFS | XXTB | ZFS | Yes | Main Backup Storage |
Installation Notes
- CentOS requires the Adaptec module to be loaded during the install process in order to see the local RAID
- The Mellanox 10GbE card needs a driver compiled and installed (make sure gcc, make, rpm-build, etc. are installed!). The install files are in root's home directory.
- BIOS is password locked
- Adaptec RAID controller is password locked
- Full yum update performed - 21.11.2013
- The Mellanox EN driver doesn't load on its own at start-up. Fix this by adding the following to /etc/rc.local: modprobe mlx4_en
- Installed MegaCli to talk to the LSI RAID controller: rpm -ivh MegaCli-8.07.08-1.noarch.rpm
- Disable SELinux - it's not supported with ZFS at the moment.
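The two boot-time fixes above come down to small config edits. A sketch of both (the paths are the stock CentOS 6 ones; adjust if the box differs):

```shell
# /etc/rc.local - append this line so the Mellanox EN driver loads at boot
modprobe mlx4_en

# /etc/selinux/config - set this so SELinux stays off after reboot
# (run "setenforce 0" to disable it for the running session too)
SELINUX=disabled
```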
LSI Controller Admin
We will use MegaCLI to communicate with the LSI controller
lsi.sh help
The controller is configured to email alerts. This is done via the root crontab, which runs the check every two hours:
00 */2 * * * /usr/local/bin/lsi.sh checkNemail
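The real checking logic lives in /usr/local/bin/lsi.sh; the general shape of a checkNemail-style job is roughly the following (the function name, MAILTO, and the grep pattern here are illustrative assumptions, not copied from lsi.sh):

```shell
#!/bin/bash
# Sketch of a check-and-email cron job: run a status command and
# mail its output only when it mentions a problem.
MAILTO=${MAILTO:-root}

check_and_mail() {
    # "$@" is the status command to run, e.g.: lsi.sh status
    local out
    out=$("$@")
    if printf '%s' "$out" | grep -qiE 'degraded|failed|error'; then
        printf '%s\n' "$out" | mail -s "RAID alert on $(hostname)" "$MAILTO"
        return 1    # problem found and mailed
    fi
    return 0        # all clear, stay quiet
}
```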
View enclosures
/opt/MegaRAID/MegaCli/MegaCli64 -EncInfo -aALL
Create the RAID0 devices to present each disk to the OS
i=0; while [ $i -le 23 ]; do /opt/MegaRAID/MegaCli/MegaCli64 -cfgldadd -r0[8:${i}] WB RA Cached CachedBadBBU -strpsz512 -a0 -NoLog; i=`expr $i + 1`; done
i=0; while [ $i -le 23 ]; do /opt/MegaRAID/MegaCli/MegaCli64 -cfgldadd -r0[9:${i}] WB RA Cached CachedBadBBU -strpsz512 -a0 -NoLog; i=`expr $i + 1`; done
Note that the enclosure IDs are 8 and 9. We found these by running:
/opt/MegaRAID/MegaCli/MegaCli64 -EncInfo -aALL
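The two creation loops above can be sketched more readably as one nested loop. This dry-run version only builds and prints the 48 commands so they can be eyeballed before anything touches the controller (run each printed line for real once happy):

```shell
#!/bin/bash
# Build (but do not run) the MegaCli commands that create one RAID0
# logical drive per disk: enclosures 8 and 9, slots 0-23.
MEGACLI=/opt/MegaRAID/MegaCli/MegaCli64
cmds=()
for enc in 8 9; do
    for slot in $(seq 0 23); do
        cmds+=("$MEGACLI -cfgldadd -r0[${enc}:${slot}] WB RA Cached CachedBadBBU -strpsz512 -a0 -NoLog")
    done
done
printf '%s\n' "${cmds[@]}"   # review the list, then execute each line
```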
Check the status of the drives
lsi.sh status
lsi.sh drives
Information on a disk (in this case Enclosure 8, disk 20)
/opt/MegaRAID/MegaCli/MegaCli64 -PDInfo -PhysDrv [8:20] -a0
Rebuild disk (in this case Enclosure 8, disk 20)
/opt/MegaRAID/MegaCli/MegaCli64 -PDRbld -Start -PhysDrv [8:20] -a0
Misc. commands
Enable controller alarm
/opt/MegaRAID/MegaCli/MegaCli64 -AdpSetProp AlarmEnbl -aALL
Disable controller alarm
/opt/MegaRAID/MegaCli/MegaCli64 -AdpSetProp AlarmDsbl -aALL
Good cheat sheet of MegaCLI commands: http://www.damtp.cam.ac.uk/internal/computing/docs/public/megacli_raid_lsi.html
Install ZFS
yum localinstall --nogpgcheck http://archive.zfsonlinux.org/epel/zfs-release-1-2.el6.noarch.rpm
yum install dkms gcc make kernel-devel perl
yum install spl zfs
chkconfig zfs on
Create ZFS Raid
zpool create -f tank /dev/sdf /dev/sdg /dev/sdh /dev/sdk /dev/sdj /dev/sdi /dev/sdl /dev/sdn /dev/sdm /dev/sdp /dev/sdo /dev/sdq /dev/sdr /dev/sds /dev/sdt /dev/sdu /dev/sdv /dev/sdx /dev/sdw /dev/sdy /dev/sdz /dev/sdaa /dev/sdab /dev/sdac /dev/sdad /dev/sdae /dev/sdaf /dev/sdai /dev/sdah /dev/sdaj /dev/sdag /dev/sdal /dev/sdam /dev/sdak /dev/sdan /dev/sdaq /dev/sdao /dev/sdar /dev/sdat /dev/sdas /dev/sdav /dev/sdap /dev/sdau /dev/sdaw /dev/sdax /dev/sday
zpool add tank spare /dev/sdd /dev/sdb
zfs create tank/projects
zfs set compression=lzjb tank/projects
zfs set dedup=on tank/projects
zfs set atime=off tank
zfs set atime=off tank/projects
What it looks like
# zfs list
NAME            USED  AVAIL  REFER  MOUNTPOINT
tank           1.38M   164T   144K  /tank
tank/projects   136K   164T   136K  /tank/projects
Common Tools
Check dedupe ratio
zpool get dedupratio tank
Snapshots
Running from cron
# ls -l /etc/cron.* | grep zfs
-rw-r--r--. 1 root root 79 Nov 21 17:24 zfs-auto-snapshot.cron.daily
-rw-r--r--. 1 root root 80 Nov 21 17:24 zfs-auto-snapshot.cron.hourly
-rw-r--r--. 1 root root 81 Nov 21 17:26 zfs-auto-snapshot.cron.monthly
-rw-r--r--. 1 root root 79 Nov 21 17:26 zfs-auto-snapshot.cron.weekly
These cron jobs all run
/usr/local/bin/zfs-auto-snapshot
The current snapshot schedule keeps 12 months of snapshots. They are accessible under the hidden .zfs directory of each dataset.
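Restoring a file is just a copy out of the read-only .zfs tree. A minimal helper, where BASE and the example snapshot name are assumptions (list /tank/projects/.zfs/snapshot to see the real names):

```shell
#!/bin/bash
# Copy a file back out of a snapshot's read-only .zfs view into the
# live filesystem. BASE/SNAP defaults are illustrative - override them.
BASE=${BASE:-/tank/projects}
SNAP=${SNAP:-zfs-auto-snap_monthly-2013-11-01-0000}

restore_from_snapshot() {
    # $1 = file path relative to the dataset root
    cp "$BASE/.zfs/snapshot/$SNAP/$1" "$BASE/$1"
}
```

For example, `restore_from_snapshot docs/report.txt` pulls that file from the chosen monthly snapshot back into the live dataset.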
Display snapshots config
zpool get listsnapshots tank
List snapshots
zfs list -r -t snapshot -o name,creation tank
zfs list -t snapshot
List space
zfs list -o space
Devices
Disk /dev/sdd: 4000.2 GB, 4000225165312 bytes
Disk /dev/sde: 4000.2 GB, 4000225165312 bytes
Disk /dev/sdf: 4000.2 GB, 4000225165312 bytes
Disk /dev/sdg: 4000.2 GB, 4000225165312 bytes
Disk /dev/sdh: 4000.2 GB, 4000225165312 bytes
Disk /dev/sdk: 4000.2 GB, 4000225165312 bytes
Disk /dev/sdj: 4000.2 GB, 4000225165312 bytes
Disk /dev/sdi: 4000.2 GB, 4000225165312 bytes
Disk /dev/sdl: 4000.2 GB, 4000225165312 bytes
Disk /dev/sdn: 4000.2 GB, 4000225165312 bytes
Disk /dev/sdm: 4000.2 GB, 4000225165312 bytes
Disk /dev/sdp: 4000.2 GB, 4000225165312 bytes
Disk /dev/sdo: 4000.2 GB, 4000225165312 bytes
Disk /dev/sdq: 4000.2 GB, 4000225165312 bytes
Disk /dev/sdr: 4000.2 GB, 4000225165312 bytes
Disk /dev/sds: 4000.2 GB, 4000225165312 bytes
Disk /dev/sdt: 4000.2 GB, 4000225165312 bytes
Disk /dev/sdu: 4000.2 GB, 4000225165312 bytes
Disk /dev/sdv: 4000.2 GB, 4000225165312 bytes
Disk /dev/sdx: 4000.2 GB, 4000225165312 bytes
Disk /dev/sdw: 4000.2 GB, 4000225165312 bytes
Disk /dev/sdy: 4000.2 GB, 4000225165312 bytes
Disk /dev/sdz: 4000.2 GB, 4000225165312 bytes
Disk /dev/sdaa: 4000.2 GB, 4000225165312 bytes
Disk /dev/sdab: 4000.2 GB, 4000225165312 bytes
Disk /dev/sdac: 4000.2 GB, 4000225165312 bytes
Disk /dev/sdad: 4000.2 GB, 4000225165312 bytes
Disk /dev/sdae: 4000.2 GB, 4000225165312 bytes
Disk /dev/sdaf: 4000.2 GB, 4000225165312 bytes
Disk /dev/sdai: 4000.2 GB, 4000225165312 bytes
Disk /dev/sdah: 4000.2 GB, 4000225165312 bytes
Disk /dev/sdaj: 4000.2 GB, 4000225165312 bytes
Disk /dev/sdag: 4000.2 GB, 4000225165312 bytes
Disk /dev/sdal: 4000.2 GB, 4000225165312 bytes
Disk /dev/sdam: 4000.2 GB, 4000225165312 bytes
Disk /dev/sdak: 4000.2 GB, 4000225165312 bytes
Disk /dev/sdan: 4000.2 GB, 4000225165312 bytes
Disk /dev/sdaq: 4000.2 GB, 4000225165312 bytes
Disk /dev/sdao: 4000.2 GB, 4000225165312 bytes
Disk /dev/sdar: 4000.2 GB, 4000225165312 bytes
Disk /dev/sdat: 4000.2 GB, 4000225165312 bytes
Disk /dev/sdas: 4000.2 GB, 4000225165312 bytes
Disk /dev/sdav: 4000.2 GB, 4000225165312 bytes
Disk /dev/sdap: 4000.2 GB, 4000225165312 bytes
Disk /dev/sdau: 4000.2 GB, 4000225165312 bytes
Disk /dev/sdaw: 4000.2 GB, 4000225165312 bytes
Disk /dev/sdax: 4000.2 GB, 4000225165312 bytes
Disk /dev/sday: 4000.2 GB, 4000225165312 bytes