1. Installation
[root@TEST01 ~]# yum provides /sbin/mdadm
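The provides query shows which package supplies /sbin/mdadm; on CentOS it is the mdadm package itself, so it can be installed directly (a minimal example assuming the stock yum repositories):
[root@TEST01 ~]# yum install -y mdadm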
2. Checking the configuration
▶ Check the RAID nodes currently configured on the system
[root@TEST01 ~]# ls -la /dev/md*
brw-r----- 1 root disk 9, 0 Dec 21 08:27 /dev/md0
1) Check the major numbers reserved for each device type
[root@centos7 ~]# cat /proc/devices
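In the Block devices section of /proc/devices the md driver is registered under major number 9, matching the "9, 0" shown for /dev/md0 above. A filtered check (the grep invocation is only an illustration):
[root@centos7 ~]# grep -w md /proc/devices
  9 md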
2) Check the major/minor numbers of the devices already created on the OS
[root@centos7 ~]# cat /proc/diskstats
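/proc/diskstats lists every block device known to the kernel together with its major/minor pair and I/O counters; the md entries can be filtered out, for example (illustrative grep only, counter columns omitted):
[root@centos7 ~]# grep ' md' /proc/diskstats
   9       0 md0 ...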
▶ Scan with the mdadm command to confirm that the array device is configured correctly
[root@TEST01 ~]# mdadm --detail --scan
ARRAY /dev/md1 level=raid1 num-devices=2 UUID=1a87ce48:f0917a06:4120df55:7dfe1c95
devices=/dev/sdb1,/dev/sdc1
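For a fuller report on a single array, or on the RAID superblock of an individual member disk, the following commands can also be used (output omitted here):
[root@TEST01 ~]# mdadm --detail /dev/md1
[root@TEST01 ~]# mdadm --examine /dev/sdb1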
3. Configuration
▶ Build the array at the chosen RAID level from the member devices with the mdadm command
[root@TEST01 ~]# mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
mdadm: /dev/sdb1 appears to contain an ext2fs file system
size=2096448K mtime=Wed Dec 21 07:42:55 2005
Continue creating array? (y/n) y
mdadm: array /dev/md1 started.
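To have the array reassembled under the same name after a reboot, the scan output is commonly appended to the mdadm configuration file (the path /etc/mdadm.conf is typical on CentOS; some distributions use /etc/mdadm/mdadm.conf instead):
[root@TEST01 ~]# mdadm --detail --scan >> /etc/mdadm.conf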
▶ Format the RAID 1 array device with ext3
[root@TEST01 ~]# mkfs.ext3 /dev/md1
mke2fs 1.37 (21-Mar-2005)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
262144 inodes, 524096 blocks
26204 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=536870912
16 block groups
32768 blocks per group, 32768 fragments per group
16384 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912
Writing inode tables: done
Creating journal (8192 blocks): done
Writing superblocks and filesystem accounting information: done
This filesystem will be automatically checked every 32 mounts or
180 days, whichever comes first. Use tune2fs -c or -i to override.
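As the mke2fs message above notes, the periodic check schedule can be changed with tune2fs. For example, to disable both the mount-count and the time-based checks (optional, shown only as an illustration):
[root@TEST01 ~]# tune2fs -c 0 -i 0 /dev/md1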
▶ Mount the RAID device /dev/md1 on a directory named /raid_disk
[root@TEST01 ~]# mkdir /raid_disk
[root@TEST01 ~]# mount /dev/md1 /raid_disk
[root@TEST01 ~]# df
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/sda7 988088 217296 719788 24% /
/dev/sda1 147764 11319 128816 9% /boot
/dev/shm 62892 0 62892 0% /dev/shm
/dev/sda9 8232040 51548 7755576 1% /home
/dev/sda6 988088 17676 919408 2% /opt
/dev/sda2 497861 11576 460581 3% /tmp
/dev/sda3 7936288 6772748 753884 90% /usr
/dev/sda8 497829 160064 312063 34% /var
/dev/md1 2063440 35880 1922744 2% /raid_disk
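To mount /raid_disk automatically at boot, a line along these lines can be added to /etc/fstab (the mount options are an example; a UUID= reference can be used instead of the device node):
/dev/md1                /raid_disk              ext3    defaults        0 0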
4. Troubleshooting
▶ Remove one disk, check the status messages, and then add the disk back to the array (a software way to simulate the removal is sketched below)
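If no disk can be physically pulled from the test machine, the failure can be simulated with mdadm itself; the member name /dev/sdc1 matches this example and the commands are shown only as an assumed illustration:
[root@TEST01 ~]# mdadm /dev/md1 --fail /dev/sdc1
[root@TEST01 ~]# mdadm /dev/md1 --remove /dev/sdc1
Either way, the scan and /proc/mdstat output then show the array running in degraded mode: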
[root@TEST01 ~]# mdadm --detail --scan
ARRAY /dev/md1 level=raid1 num-devices=2 UUID=1a87ce48:f0917a06:4120df55:7dfe1c95
devices=/dev/sdb1
[root@TEST01 ~]# cat /proc/mdstat
Personalities : [raid1]
md1 : active raid1 sdb1[0]
2096384 blocks [2/1] [U_]
▶ Re-insert the newly added disk into the array with the mdadm command
[root@TEST01 ~]# mdadm /dev/md1 --add /dev/sdc1
mdadm: hot added /dev/sdc1
[root@TEST01 ~]# mdadm --detail --scan
ARRAY /dev/md1 level=raid1 num-devices=2 spares=1 UUID=1a87ce48:f0917a06:4120df55:7dfe1c95
devices=/dev/sdb1,/dev/sdc1
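After the hot add, the kernel rebuilds the mirror onto the new member in the background; the resync progress can be followed in /proc/mdstat, e.g. (the watch interval is arbitrary):
[root@TEST01 ~]# watch -n 5 cat /proc/mdstat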