Common RAID Levels on Linux and Creating Software RAID

Source: https://www.cnblogs.com/llife/archive/2019/08/25/11408941.html




RAID can greatly improve both disk performance and reliability, so it is a technique well worth mastering. This article introduces the common RAID levels and how to create them as software RAID on Linux.


mdadm

  • Create a software RAID
mdadm -C -v /dev/<device> -l<level> -n<count> <disks> [-x<count> <hot-spare disks>]

-C: create a new array (--create)
-v: show details (--verbose)
-l: set the RAID level (--level=)
-n: number of active devices in the array (--raid-devices=)
-x: number of initial spare devices (--spare-devices=); a hot-spare disk automatically takes over when a working disk fails

  • View detailed information
mdadm -D /dev/<device>

-D: print detailed information for one or more md devices (--detail)

  • Check the RAID status
cat /proc/mdstat
  • Simulate a disk failure
mdadm -f /dev/<device> <disk>

-f: mark a member disk as faulty (--fail)

  • Remove a failed disk
mdadm -r /dev/<device> <disk>

-r: remove (--remove)

  • Add a new disk as a hot spare
mdadm -a /dev/<device> <disk>

-a: add (--add)
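Since /proc/mdstat is plain text, array state can also be extracted programmatically. Below is a minimal sketch in Python; the regular expressions are assumptions fitted to the transcripts shown later in this article, not a complete parser:

```python
import re

# Sample /proc/mdstat content, in the format captured later in this article.
mdstat = """\
Personalities : [raid1]
md1 : active raid1 sdd1[2](S) sdc1[1] sdb1[0]
      20953088 blocks super 1.2 [2/2] [UU]
"""

def parse_mdstat(text):
    """Extract array name, state, level, and members from an mdstat dump."""
    arrays = {}
    for line in text.splitlines():
        m = re.match(r"^(md\d+) : (\w+) (raid\d+) (.+)$", line)
        if m:
            name, state, level, members = m.groups()
            # Each member looks like "sdb1[0]" with an optional "(S)" spare flag.
            devs = re.findall(r"(\w+)\[\d+\](\(S\))?", members)
            arrays[name] = {
                "state": state,
                "level": level,
                "active": [d for d, s in devs if not s],
                "spares": [d for d, s in devs if s],
            }
    return arrays

info = parse_mdstat(mdstat)
print(info["md1"]["level"])   # raid1
print(info["md1"]["spares"])  # ['sdd1']
```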


RAID0

RAID0, commonly called striping, combines two or more disks into one logical disk whose capacity is the sum of all members. Because several disks act as one, writes can proceed in parallel and write speed improves. However, the data has no redundancy and no fault tolerance: if any one physical disk fails, all data is lost. RAID0 therefore suits large volumes of data with low safety requirements, such as audio and video file storage.
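Striping can be illustrated with a toy model (a sketch only, not how the md driver actually lays out chunks; the disk names and sizes are assumptions matching the experiment below): chunks are dealt round-robin across the members, so capacity is the sum of all disks, but every disk ends up holding part of every file.

```python
# Toy model of RAID0 striping: chunks are distributed round-robin.
chunks = [f"chunk{i}" for i in range(6)]
disks = {"sdb": [], "sdc": []}

names = list(disks)
for i, chunk in enumerate(chunks):
    disks[names[i % len(names)]].append(chunk)

print(disks["sdb"])  # ['chunk0', 'chunk2', 'chunk4']
print(disks["sdc"])  # ['chunk1', 'chunk3', 'chunk5']

# Capacity is the sum of all members; losing any one disk loses data.
sizes_gb = {"sdb": 20, "sdc": 20}
print(sum(sizes_gb.values()))  # 40
```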


Experiment: create a RAID0 array, format it, and mount it for use.

1. Add two 20 GB disks and partition them with partition type ID fd.

[root@localhost ~]# fdisk -l | grep raid
/dev/sdb1            2048    41943039    20970496   fd  Linux raid autodetect
/dev/sdc1            2048    41943039    20970496   fd  Linux raid autodetect

2. Create the RAID0 array.

[root@localhost ~]# mdadm -C -v /dev/md0 -l0 -n2 /dev/sd{b,c}1
mdadm: chunk size defaults to 512K
mdadm: Fail create md0 when using /sys/module/md_mod/parameters/new_array
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.

3. Check the status in /proc/mdstat.

[root@localhost ~]# cat /proc/mdstat
Personalities : [raid0]
md0 : active raid0 sdc1[1] sdb1[0]
      41906176 blocks super 1.2 512k chunks

unused devices: <none>

4. View the detailed information of the RAID0 array.

[root@localhost ~]# mdadm -D /dev/md0
/dev/md0:
           Version : 1.2
     Creation Time : Sun Aug 25 15:28:13 2019
        Raid Level : raid0
        Array Size : 41906176 (39.96 GiB 42.91 GB)
      Raid Devices : 2
     Total Devices : 2
       Persistence : Superblock is persistent

       Update Time : Sun Aug 25 15:28:13 2019
             State : clean
    Active Devices : 2
   Working Devices : 2
    Failed Devices : 0
     Spare Devices : 0

        Chunk Size : 512K

Consistency Policy : none

              Name : localhost:0  (local to host localhost)
              UUID : 7ff54c57:b99a59da:6b56c6d5:a4576ccf
            Events : 0

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       33        1      active sync   /dev/sdc1

5. Format the array.

[root@localhost ~]# mkfs.xfs /dev/md0
meta-data=/dev/md0               isize=512    agcount=16, agsize=654720 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0, sparse=0
data     =                       bsize=4096   blocks=10475520, imaxpct=25
         =                       sunit=128    swidth=256 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=5120, version=2
         =                       sectsz=512   sunit=8 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0

6. Mount and use it.

[root@localhost ~]# mkdir /mnt/md0
[root@localhost ~]# mount /dev/md0 /mnt/md0/
[root@localhost ~]# df -hT
Filesystem              Type      Size  Used Avail Use% Mounted on
/dev/mapper/centos-root xfs        17G 1013M   16G   6% /
devtmpfs                devtmpfs  901M     0  901M   0% /dev
tmpfs                   tmpfs     912M     0  912M   0% /dev/shm
tmpfs                   tmpfs     912M  8.7M  904M   1% /run
tmpfs                   tmpfs     912M     0  912M   0% /sys/fs/cgroup
/dev/sda1               xfs      1014M  143M  872M  15% /boot
tmpfs                   tmpfs     183M     0  183M   0% /run/user/0
/dev/md0                xfs        40G   33M   40G   1% /mnt/md0

RAID1

RAID1, commonly called mirroring, consists of at least two disks that store identical data, providing redundancy. Read speed improves somewhat; write speed is theoretically the same as a single disk but in practice slightly lower, since data must be written to every disk at the same time. Its fault tolerance is the best of all levels: the array keeps working as long as a single disk survives. However, its capacity utilization is the lowest, only 50%, which also makes it the most expensive. RAID1 suits data with very high safety requirements, such as database files.
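The mirroring idea can be sketched in a few lines (a toy model, not md's implementation; disk names are assumptions):

```python
# Toy model of RAID1 mirroring: every write goes to every member disk.
disks = {"sdb": [], "sdc": []}

def mirror_write(block):
    for d in disks.values():
        d.append(block)

for b in ["b0", "b1", "b2"]:
    mirror_write(b)

# Any single surviving disk holds the full data set.
print(disks["sdb"] == disks["sdc"])  # True

# Usable capacity of a mirror is one member's size: 50% for two disks.
sizes_gb = [20, 20]
print(min(sizes_gb))  # 20
```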


Experiment: create a RAID1 array, format, mount and use it, simulate a failure, and re-add a hot spare.

1. Add three 20 GB disks and partition them with partition type ID fd.

[root@localhost ~]# fdisk -l | grep raid
/dev/sdb1            2048    41943039    20970496   fd  Linux raid autodetect
/dev/sdc1            2048    41943039    20970496   fd  Linux raid autodetect
/dev/sdd1            2048    41943039    20970496   fd  Linux raid autodetect

2. Create the RAID1 array and add one hot-spare disk.

[root@localhost ~]# mdadm -C -v /dev/md1 -l1 -n2 /dev/sd{b,c}1 -x1 /dev/sdd1
mdadm: Note: this array has metadata at the start and
    may not be suitable as a boot device.  If you plan to
    store '/boot' on this device please ensure that
    your boot-loader understands md/v1.x metadata, or use
    --metadata=0.90
mdadm: size set to 20953088K
Continue creating array? y
mdadm: Fail create md1 when using /sys/module/md_mod/parameters/new_array
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md1 started.

3. Check the status in /proc/mdstat.

[root@localhost ~]# cat /proc/mdstat
Personalities : [raid1]
md1 : active raid1 sdd1[2](S) sdc1[1] sdb1[0]
      20953088 blocks super 1.2 [2/2] [UU]
      [========>............]  resync = 44.6% (9345792/20953088) finish=0.9min speed=203996K/sec

unused devices: <none>
[root@localhost ~]# cat /proc/mdstat
Personalities : [raid1]
md1 : active raid1 sdd1[2](S) sdc1[1] sdb1[0]
      20953088 blocks super 1.2 [2/2] [UU]

unused devices: <none>

4. View the detailed information of the RAID1 array.

[root@localhost ~]# mdadm -D /dev/md1
/dev/md1:
           Version : 1.2
     Creation Time : Sun Aug 25 15:38:44 2019
        Raid Level : raid1
        Array Size : 20953088 (19.98 GiB 21.46 GB)
     Used Dev Size : 20953088 (19.98 GiB 21.46 GB)
      Raid Devices : 2
     Total Devices : 3
       Persistence : Superblock is persistent

       Update Time : Sun Aug 25 15:39:24 2019
             State : clean, resyncing
    Active Devices : 2
   Working Devices : 3
    Failed Devices : 0
     Spare Devices : 1

Consistency Policy : resync

     Resync Status : 40% complete

              Name : localhost:1  (local to host localhost)
              UUID : b921e8b3:a18e2fc9:11706ba4:ed633dfd
            Events : 6

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       33        1      active sync   /dev/sdc1

       2       8       49        -      spare   /dev/sdd1

5. Format the array.

[root@localhost ~]# mkfs.xfs /dev/md1
meta-data=/dev/md1               isize=512    agcount=4, agsize=1309568 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0, sparse=0
data     =                       bsize=4096   blocks=5238272, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0

6. Mount and use it.

[root@localhost ~]# mkdir /mnt/md1
[root@localhost ~]# mount /dev/md1 /mnt/md1/
[root@localhost ~]# df -hT
Filesystem              Type      Size  Used Avail Use% Mounted on
/dev/mapper/centos-root xfs        17G 1014M   16G   6% /
devtmpfs                devtmpfs  901M     0  901M   0% /dev
tmpfs                   tmpfs     912M     0  912M   0% /dev/shm
tmpfs                   tmpfs     912M  8.7M  904M   1% /run
tmpfs                   tmpfs     912M     0  912M   0% /sys/fs/cgroup
/dev/sda1               xfs      1014M  143M  872M  15% /boot
tmpfs                   tmpfs     183M     0  183M   0% /run/user/0
/dev/md1                xfs        20G   33M   20G   1% /mnt/md1

7. Create test files.

[root@localhost ~]# touch /mnt/md1/test{1..9}.txt
[root@localhost ~]# ls /mnt/md1/
test1.txt  test2.txt  test3.txt  test4.txt  test5.txt  test6.txt  test7.txt  test8.txt  test9.txt

8. Simulate a failure.

[root@localhost ~]# mdadm -f /dev/md1 /dev/sdb1
mdadm: set /dev/sdb1 faulty in /dev/md1

9. Check the test files.

[root@localhost ~]# ls /mnt/md1/
test1.txt  test2.txt  test3.txt  test4.txt  test5.txt  test6.txt  test7.txt  test8.txt  test9.txt

10. Check the status.

[root@localhost ~]# cat /proc/mdstat
Personalities : [raid1]
md1 : active raid1 sdd1[2] sdc1[1] sdb1[0](F)
      20953088 blocks super 1.2 [2/1] [_U]
      [=====>...............]  recovery = 26.7% (5600384/20953088) finish=1.2min speed=200013K/sec

unused devices: <none>
[root@localhost ~]# mdadm -D /dev/md1
/dev/md1:
           Version : 1.2
     Creation Time : Sun Aug 25 15:38:44 2019
        Raid Level : raid1
        Array Size : 20953088 (19.98 GiB 21.46 GB)
     Used Dev Size : 20953088 (19.98 GiB 21.46 GB)
      Raid Devices : 2
     Total Devices : 3
       Persistence : Superblock is persistent

       Update Time : Sun Aug 25 15:47:57 2019
             State : active, degraded, recovering
    Active Devices : 1
   Working Devices : 2
    Failed Devices : 1
     Spare Devices : 1

Consistency Policy : resync

    Rebuild Status : 17% complete

              Name : localhost:1  (local to host localhost)
              UUID : b921e8b3:a18e2fc9:11706ba4:ed633dfd
            Events : 22

    Number   Major   Minor   RaidDevice State
       2       8       49        0      spare rebuilding   /dev/sdd1
       1       8       33        1      active sync   /dev/sdc1

       0       8       17        -      faulty   /dev/sdb1

11. Check the status again.

[root@localhost ~]# cat /proc/mdstat
Personalities : [raid1]
md1 : active raid1 sdd1[2] sdc1[1] sdb1[0](F)
      20953088 blocks super 1.2 [2/2] [UU]

unused devices: <none>
[root@localhost ~]# mdadm -D /dev/md1
/dev/md1:
           Version : 1.2
     Creation Time : Sun Aug 25 15:38:44 2019
        Raid Level : raid1
        Array Size : 20953088 (19.98 GiB 21.46 GB)
     Used Dev Size : 20953088 (19.98 GiB 21.46 GB)
      Raid Devices : 2
     Total Devices : 3
       Persistence : Superblock is persistent

       Update Time : Sun Aug 25 15:49:28 2019
             State : active
    Active Devices : 2
   Working Devices : 2
    Failed Devices : 1
     Spare Devices : 0

Consistency Policy : resync

              Name : localhost:1  (local to host localhost)
              UUID : b921e8b3:a18e2fc9:11706ba4:ed633dfd
            Events : 37

    Number   Major   Minor   RaidDevice State
       2       8       49        0      active sync   /dev/sdd1
       1       8       33        1      active sync   /dev/sdc1

       0       8       17        -      faulty   /dev/sdb1

12. Remove the failed disk.

[root@localhost ~]# mdadm -r /dev/md1 /dev/sdb1
mdadm: hot removed /dev/sdb1 from /dev/md1
[root@localhost ~]# mdadm -D /dev/md1
/dev/md1:
           Version : 1.2
     Creation Time : Sun Aug 25 15:38:44 2019
        Raid Level : raid1
        Array Size : 20953088 (19.98 GiB 21.46 GB)
     Used Dev Size : 20953088 (19.98 GiB 21.46 GB)
      Raid Devices : 2
     Total Devices : 2
       Persistence : Superblock is persistent

       Update Time : Sun Aug 25 15:52:57 2019
             State : active
    Active Devices : 2
   Working Devices : 2
    Failed Devices : 0
     Spare Devices : 0

Consistency Policy : resync

              Name : localhost:1  (local to host localhost)
              UUID : b921e8b3:a18e2fc9:11706ba4:ed633dfd
            Events : 38

    Number   Major   Minor   RaidDevice State
       2       8       49        0      active sync   /dev/sdd1
       1       8       33        1      active sync   /dev/sdc1

13. Re-add the disk as a hot spare.

[root@localhost ~]# mdadm -a /dev/md1 /dev/sdb1
mdadm: added /dev/sdb1
[root@localhost ~]# mdadm -D /dev/md1
/dev/md1:
           Version : 1.2
     Creation Time : Sun Aug 25 15:38:44 2019
        Raid Level : raid1
        Array Size : 20953088 (19.98 GiB 21.46 GB)
     Used Dev Size : 20953088 (19.98 GiB 21.46 GB)
      Raid Devices : 2
     Total Devices : 3
       Persistence : Superblock is persistent

       Update Time : Sun Aug 25 15:53:32 2019
             State : active
    Active Devices : 2
   Working Devices : 3
    Failed Devices : 0
     Spare Devices : 1

Consistency Policy : resync

              Name : localhost:1  (local to host localhost)
              UUID : b921e8b3:a18e2fc9:11706ba4:ed633dfd
            Events : 39

    Number   Major   Minor   RaidDevice State
       2       8       49        0      active sync   /dev/sdd1
       1       8       33        1      active sync   /dev/sdc1

       3       8       17        -      spare   /dev/sdb1

RAID5

RAID5 requires at least three disks. It distributes the data across every disk in the array together with parity information; the data and the parity can verify each other through an algorithm, so when any one piece is lost, the RAID controller can recompute the missing data from the remaining two. RAID5 can therefore tolerate at most one failed disk. Compared with the other levels it strikes a balance between fault tolerance and cost, which makes it popular with most users; it is the most commonly used level in ordinary disk arrays.
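The parity arithmetic behind RAID5 is plain XOR, which makes the reconstruction easy to demonstrate. This is a byte-level toy sketch that ignores chunking and the rotation of the parity position across disks:

```python
# RAID5 parity is XOR: parity = d0 ^ d1, and any lost member can be
# recomputed by XOR-ing the survivors.
d0 = bytes([0b10110010])
d1 = bytes([0b01101100])
parity = bytes(a ^ b for a, b in zip(d0, d1))

# Simulate losing d0: rebuild it from d1 and the parity block.
rebuilt = bytes(a ^ b for a, b in zip(d1, parity))
print(rebuilt == d0)  # True
```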


Experiment: create a RAID5 array, format, mount and use it, simulate a failure, and re-add a hot spare.

1. Add four 20 GB disks and partition them with partition type ID fd.

[root@localhost ~]# fdisk -l | grep raid
/dev/sdb1            2048    41943039    20970496   fd  Linux raid autodetect
/dev/sdc1            2048    41943039    20970496   fd  Linux raid autodetect
/dev/sdd1            2048    41943039    20970496   fd  Linux raid autodetect
/dev/sde1            2048    41943039    20970496   fd  Linux raid autodetect

2. Create the RAID5 array and add one hot-spare disk.

[root@localhost ~]# mdadm -C -v /dev/md5 -l5 -n3 /dev/sd[b-d]1 -x1 /dev/sde1
mdadm: layout defaults to left-symmetric
mdadm: layout defaults to left-symmetric
mdadm: chunk size defaults to 512K
mdadm: size set to 20953088K
mdadm: Fail create md5 when using /sys/module/md_mod/parameters/new_array
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md5 started.

3. Check the status in /proc/mdstat.

[root@localhost ~]# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md5 : active raid5 sdd1[4] sde1[3](S) sdc1[1] sdb1[0]
      41906176 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/2] [UU_]
      [====>................]  recovery = 24.1% (5057340/20953088) finish=1.3min speed=202293K/sec

unused devices: <none>
[root@localhost ~]# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md5 : active raid5 sdd1[4] sde1[3](S) sdc1[1] sdb1[0]
      41906176 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU]

unused devices: <none>

4. View the detailed information of the RAID5 array.

[root@localhost ~]# mdadm -D /dev/md5
/dev/md5:
           Version : 1.2
     Creation Time : Sun Aug 25 16:13:44 2019
        Raid Level : raid5
        Array Size : 41906176 (39.96 GiB 42.91 GB)
     Used Dev Size : 20953088 (19.98 GiB 21.46 GB)
      Raid Devices : 3
     Total Devices : 4
       Persistence : Superblock is persistent

       Update Time : Sun Aug 25 16:15:29 2019
             State : clean
    Active Devices : 3
   Working Devices : 4
    Failed Devices : 0
     Spare Devices : 1

            Layout : left-symmetric
        Chunk Size : 512K

Consistency Policy : resync

              Name : localhost:5  (local to host localhost)
              UUID : a055094e:9adaff79:2edae9b9:0dcc3f1b
            Events : 18

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       33        1      active sync   /dev/sdc1
       4       8       49        2      active sync   /dev/sdd1

       3       8       65        -      spare   /dev/sde1

5. Format the array.

[root@localhost ~]# mkfs.xfs /dev/md5
meta-data=/dev/md5               isize=512    agcount=16, agsize=654720 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0, sparse=0
data     =                       bsize=4096   blocks=10475520, imaxpct=25
         =                       sunit=128    swidth=256 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=5120, version=2
         =                       sectsz=512   sunit=8 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0

6. Mount and use it.

[root@localhost ~]# mkdir /mnt/md5
[root@localhost ~]# mount /dev/md5 /mnt/md5/
[root@localhost ~]# df -hT
Filesystem              Type      Size  Used Avail Use% Mounted on
/dev/mapper/centos-root xfs        17G 1014M   16G   6% /
devtmpfs                devtmpfs  901M     0  901M   0% /dev
tmpfs                   tmpfs     912M     0  912M   0% /dev/shm
tmpfs                   tmpfs     912M  8.7M  904M   1% /run
tmpfs                   tmpfs     912M     0  912M   0% /sys/fs/cgroup
/dev/sda1               xfs      1014M  143M  872M  15% /boot
tmpfs                   tmpfs     183M     0  183M   0% /run/user/0
/dev/md5                xfs        40G   33M   40G   1% /mnt/md5

7. Create test files.

[root@localhost ~]# touch /mnt/md5/test{1..9}.txt
[root@localhost ~]# ls /mnt/md5/
test1.txt  test2.txt  test3.txt  test4.txt  test5.txt  test6.txt  test7.txt  test8.txt  test9.txt

8. Simulate a failure.

[root@localhost ~]# mdadm -f /dev/md5 /dev/sdb1
mdadm: set /dev/sdb1 faulty in /dev/md5

9. Check the test files.

[root@localhost ~]# ls /mnt/md5/
test1.txt  test2.txt  test3.txt  test4.txt  test5.txt  test6.txt  test7.txt  test8.txt  test9.txt

10. Check the status.

[root@localhost ~]# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md5 : active raid5 sdd1[4] sde1[3] sdc1[1] sdb1[0](F)
      41906176 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/2] [_UU]
      [====>................]  recovery = 21.0% (4411136/20953088) finish=1.3min speed=210054K/sec

unused devices: <none>
[root@localhost ~]# mdadm -D /dev/md5
/dev/md5:
           Version : 1.2
     Creation Time : Sun Aug 25 16:13:44 2019
        Raid Level : raid5
        Array Size : 41906176 (39.96 GiB 42.91 GB)
     Used Dev Size : 20953088 (19.98 GiB 21.46 GB)
      Raid Devices : 3
     Total Devices : 4
       Persistence : Superblock is persistent

       Update Time : Sun Aug 25 16:21:31 2019
             State : clean, degraded, recovering
    Active Devices : 2
   Working Devices : 3
    Failed Devices : 1
     Spare Devices : 1

            Layout : left-symmetric
        Chunk Size : 512K

Consistency Policy : resync

    Rebuild Status : 12% complete

              Name : localhost:5  (local to host localhost)
              UUID : a055094e:9adaff79:2edae9b9:0dcc3f1b
            Events : 23

    Number   Major   Minor   RaidDevice State
       3       8       65        0      spare rebuilding   /dev/sde1
       1       8       33        1      active sync   /dev/sdc1
       4       8       49        2      active sync   /dev/sdd1

       0       8       17        -      faulty   /dev/sdb1

11. Check the status again.

[root@localhost ~]# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md5 : active raid5 sdd1[4] sde1[3] sdc1[1] sdb1[0](F)
      41906176 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU]

unused devices: <none>
[root@localhost ~]# mdadm -D /dev/md5
/dev/md5:
           Version : 1.2
     Creation Time : Sun Aug 25 16:13:44 2019
        Raid Level : raid5
        Array Size : 41906176 (39.96 GiB 42.91 GB)
     Used Dev Size : 20953088 (19.98 GiB 21.46 GB)
      Raid Devices : 3
     Total Devices : 4
       Persistence : Superblock is persistent

       Update Time : Sun Aug 25 16:23:09 2019
             State : clean
    Active Devices : 3
   Working Devices : 3
    Failed Devices : 1
     Spare Devices : 0

            Layout : left-symmetric
        Chunk Size : 512K

Consistency Policy : resync

              Name : localhost:5  (local to host localhost)
              UUID : a055094e:9adaff79:2edae9b9:0dcc3f1b
            Events : 39

    Number   Major   Minor   RaidDevice State
       3       8       65        0      active sync   /dev/sde1
       1       8       33        1      active sync   /dev/sdc1
       4       8       49        2      active sync   /dev/sdd1

       0       8       17        -      faulty   /dev/sdb1

12. Remove the failed disk.

[root@localhost ~]# mdadm -r /dev/md5 /dev/sdb1
mdadm: hot removed /dev/sdb1 from /dev/md5
[root@localhost ~]# mdadm -D /dev/md5
/dev/md5:
           Version : 1.2
     Creation Time : Sun Aug 25 16:13:44 2019
        Raid Level : raid5
        Array Size : 41906176 (39.96 GiB 42.91 GB)
     Used Dev Size : 20953088 (19.98 GiB 21.46 GB)
      Raid Devices : 3
     Total Devices : 3
       Persistence : Superblock is persistent

       Update Time : Sun Aug 25 16:25:01 2019
             State : clean
    Active Devices : 3
   Working Devices : 3
    Failed Devices : 0
     Spare Devices : 0

            Layout : left-symmetric
        Chunk Size : 512K

Consistency Policy : resync

              Name : localhost:5  (local to host localhost)
              UUID : a055094e:9adaff79:2edae9b9:0dcc3f1b
            Events : 40

    Number   Major   Minor   RaidDevice State
       3       8       65        0      active sync   /dev/sde1
       1       8       33        1      active sync   /dev/sdc1
       4       8       49        2      active sync   /dev/sdd1

13. Re-add the disk as a hot spare.

[root@localhost ~]# mdadm -a /dev/md5 /dev/sdb1
mdadm: added /dev/sdb1
[root@localhost ~]# mdadm -D /dev/md5
/dev/md5:
           Version : 1.2
     Creation Time : Sun Aug 25 16:13:44 2019
        Raid Level : raid5
        Array Size : 41906176 (39.96 GiB 42.91 GB)
     Used Dev Size : 20953088 (19.98 GiB 21.46 GB)
      Raid Devices : 3
     Total Devices : 4
       Persistence : Superblock is persistent

       Update Time : Sun Aug 25 16:25:22 2019
             State : clean
    Active Devices : 3
   Working Devices : 4
    Failed Devices : 0
     Spare Devices : 1

            Layout : left-symmetric
        Chunk Size : 512K

Consistency Policy : resync

              Name : localhost:5  (local to host localhost)
              UUID : a055094e:9adaff79:2edae9b9:0dcc3f1b
            Events : 41

    Number   Major   Minor   RaidDevice State
       3       8       65        0      active sync   /dev/sde1
       1       8       33        1      active sync   /dev/sdc1
       4       8       49        2      active sync   /dev/sdd1

       5       8       17        -      spare   /dev/sdb1

RAID6

RAID6 is an improvement on RAID5: it adds a second parity block, raising the number of disks the array can survive losing from RAID5's one to two. Since two disks in the same array rarely fail at the same time, RAID6 trades the cost of one extra disk for higher data safety than RAID5.
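In terms of capacity, RAID6 reserves two disks' worth of space for parity (in Linux md, an XOR "P" block plus a Reed-Solomon "Q" block), so a quick calculation shows what it gives up relative to RAID5. The sizes below assume the 20 GB disks used throughout this article:

```python
def usable_gb(n_disks, disk_gb, parity_disks):
    """Usable capacity of a striped-parity array (RAID5: 1 parity, RAID6: 2)."""
    return (n_disks - parity_disks) * disk_gb

print(usable_gb(4, 20, 1))  # 60 -> RAID5 on 4 x 20 GB disks
print(usable_gb(4, 20, 2))  # 40 -> RAID6 on 4 x 20 GB disks
```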


Experiment: create a RAID6 array, format, mount and use it, simulate failures, and re-add hot spares.

1. Add six 20 GB disks and partition them with partition type ID fd.

[root@localhost ~]# fdisk -l | grep raid
/dev/sdb1            2048    41943039    20970496   fd  Linux raid autodetect
/dev/sdc1            2048    41943039    20970496   fd  Linux raid autodetect
/dev/sdd1            2048    41943039    20970496   fd  Linux raid autodetect
/dev/sde1            2048    41943039    20970496   fd  Linux raid autodetect
/dev/sdf1            2048    41943039    20970496   fd  Linux raid autodetect
/dev/sdg1            2048    41943039    20970496   fd  Linux raid autodetect

2. Create the RAID6 array and add two hot-spare disks.

[root@localhost ~]# mdadm -C -v /dev/md6 -l6 -n4 /dev/sd[b-e]1 -x2 /dev/sd[f-g]1
mdadm: layout defaults to left-symmetric
mdadm: layout defaults to left-symmetric
mdadm: chunk size defaults to 512K
mdadm: size set to 20953088K
mdadm: Fail create md6 when using /sys/module/md_mod/parameters/new_array
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md6 started.

3. Check the status in /proc/mdstat.

[root@localhost ~]# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md6 : active raid6 sdg1[5](S) sdf1[4](S) sde1[3] sdd1[2] sdc1[1] sdb1[0]
      41906176 blocks super 1.2 level 6, 512k chunk, algorithm 2 [4/4] [UUUU]
      [===>.................]  resync = 18.9% (3962940/20953088) finish=1.3min speed=208575K/sec

unused devices: <none>
[root@localhost ~]# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md6 : active raid6 sdg1[5](S) sdf1[4](S) sde1[3] sdd1[2] sdc1[1] sdb1[0]
      41906176 blocks super 1.2 level 6, 512k chunk, algorithm 2 [4/4] [UUUU]

unused devices: <none>

4. View the detailed information of the RAID6 array.

[root@localhost ~]# mdadm -D /dev/md6
/dev/md6:
           Version : 1.2
     Creation Time : Sun Aug 25 16:34:36 2019
        Raid Level : raid6
        Array Size : 41906176 (39.96 GiB 42.91 GB)
     Used Dev Size : 20953088 (19.98 GiB 21.46 GB)
      Raid Devices : 4
     Total Devices : 6
       Persistence : Superblock is persistent

       Update Time : Sun Aug 25 16:34:43 2019
             State : clean, resyncing
    Active Devices : 4
   Working Devices : 6
    Failed Devices : 0
     Spare Devices : 2

            Layout : left-symmetric
        Chunk Size : 512K

Consistency Policy : resync

     Resync Status : 10% complete

              Name : localhost:6  (local to host localhost)
              UUID : 7c3d15a2:4066f2c6:742f3e4c:82aae1bb
            Events : 1

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       33        1      active sync   /dev/sdc1
       2       8       49        2      active sync   /dev/sdd1
       3       8       65        3      active sync   /dev/sde1

       4       8       81        -      spare   /dev/sdf1
       5       8       97        -      spare   /dev/sdg1

5. Format the array.

[root@localhost ~]# mkfs.xfs /dev/md6
meta-data=/dev/md6               isize=512    agcount=16, agsize=654720 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0, sparse=0
data     =                       bsize=4096   blocks=10475520, imaxpct=25
         =                       sunit=128    swidth=256 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=5120, version=2
         =                       sectsz=512   sunit=8 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0

6. Mount and use it.

[root@localhost ~]# mkdir /mnt/md6
[root@localhost ~]# mount /dev/md6 /mnt/md6/
[root@localhost ~]# df -hT
Filesystem              Type      Size  Used Avail Use% Mounted on
/dev/mapper/centos-root xfs        17G 1014M   16G   6% /
devtmpfs                devtmpfs  901M     0  901M   0% /dev
tmpfs                   tmpfs     912M     0  912M   0% /dev/shm
tmpfs                   tmpfs     912M  8.7M  903M   1% /run
tmpfs                   tmpfs     912M     0  912M   0% /sys/fs/cgroup
/dev/sda1               xfs      1014M  143M  872M  15% /boot
tmpfs                   tmpfs     183M     0  183M   0% /run/user/0
/dev/md6                xfs        40G   33M   40G   1% /mnt/md6

7. Create test files.

[root@localhost ~]# touch /mnt/md6/test{1..9}.txt
[root@localhost ~]# ls /mnt/md6/
test1.txt  test2.txt  test3.txt  test4.txt  test5.txt  test6.txt  test7.txt  test8.txt  test9.txt

8. Simulate failures.

[root@localhost ~]# mdadm -f /dev/md6 /dev/sdb1
mdadm: set /dev/sdb1 faulty in /dev/md6
[root@localhost ~]# mdadm -f /dev/md6 /dev/sdc1
mdadm: set /dev/sdc1 faulty in /dev/md6

9. Check the test files.

[root@localhost ~]# ls /mnt/md6/
test1.txt  test2.txt  test3.txt  test4.txt  test5.txt  test6.txt  test7.txt  test8.txt  test9.txt

10. Check the status.

[root@localhost ~]# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md6 : active raid6 sdg1[5] sdf1[4] sde1[3] sdd1[2] sdc1[1](F) sdb1[0](F)
      41906176 blocks super 1.2 level 6, 512k chunk, algorithm 2 [4/2] [__UU]
      [====>................]  recovery = 23.8% (4993596/20953088) finish=1.2min speed=208066K/sec

unused devices: <none>
[root@localhost ~]# mdadm -D /dev/md6
/dev/md6:
           Version : 1.2
     Creation Time : Sun Aug 25 16:34:36 2019
        Raid Level : raid6
        Array Size : 41906176 (39.96 GiB 42.91 GB)
     Used Dev Size : 20953088 (19.98 GiB 21.46 GB)
      Raid Devices : 4
     Total Devices : 6
       Persistence : Superblock is persistent

       Update Time : Sun Aug 25 16:41:09 2019
             State : clean, degraded, recovering
    Active Devices : 2
   Working Devices : 4
    Failed Devices : 2
     Spare Devices : 2

            Layout : left-symmetric
        Chunk Size : 512K

Consistency Policy : resync

    Rebuild Status : 13% complete

              Name : localhost:6  (local to host localhost)
              UUID : 7c3d15a2:4066f2c6:742f3e4c:82aae1bb
            Events : 27

    Number   Major   Minor   RaidDevice State
       5       8       97        0      spare rebuilding   /dev/sdg1
       4       8       81        1      spare rebuilding   /dev/sdf1
       2       8       49        2      active sync   /dev/sdd1
       3       8       65        3      active sync   /dev/sde1

       0       8       17        -      faulty   /dev/sdb1
       1       8       33        -      faulty   /dev/sdc1

11. Check the status again.

[root@localhost ~]# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md6 : active raid6 sdg1[5] sdf1[4] sde1[3] sdd1[2] sdc1[1](F) sdb1[0](F)
      41906176 blocks super 1.2 level 6, 512k chunk, algorithm 2 [4/4] [UUUU]

unused devices: <none>
[root@localhost ~]# mdadm -D /dev/md6
/dev/md6:
           Version : 1.2
     Creation Time : Sun Aug 25 16:34:36 2019
        Raid Level : raid6
        Array Size : 41906176 (39.96 GiB 42.91 GB)
     Used Dev Size : 20953088 (19.98 GiB 21.46 GB)
      Raid Devices : 4
     Total Devices : 6
       Persistence : Superblock is persistent

       Update Time : Sun Aug 25 16:42:42 2019
             State : clean
    Active Devices : 4
   Working Devices : 4
    Failed Devices : 2
     Spare Devices : 0

            Layout : left-symmetric
        Chunk Size : 512K

Consistency Policy : resync

              Name : localhost:6  (local to host localhost)
              UUID : 7c3d15a2:4066f2c6:742f3e4c:82aae1bb
            Events : 46

    Number   Major   Minor   RaidDevice State
       5       8       97        0      active sync   /dev/sdg1
       4       8       81        1      active sync   /dev/sdf1
       2       8       49        2      active sync   /dev/sdd1
       3       8       65        3      active sync   /dev/sde1

       0       8       17        -      faulty   /dev/sdb1
       1       8       33        -      faulty   /dev/sdc1

12. Remove the failed disks.

[root@localhost ~]# mdadm -r /dev/md6 /dev/sd{b,c}1
mdadm: hot removed /dev/sdb1 from /dev/md6
mdadm: hot removed /dev/sdc1 from /dev/md6
[root@localhost ~]# mdadm -D /dev/md6
/dev/md6:
           Version : 1.2
     Creation Time : Sun Aug 25 16:34:36 2019
        Raid Level : raid6
        Array Size : 41906176 (39.96 GiB 42.91 GB)
     Used Dev Size : 20953088 (19.98 GiB 21.46 GB)
      Raid Devices : 4
     Total Devices : 4
       Persistence : Superblock is persistent

       Update Time : Sun Aug 25 16:43:43 2019
             State : clean
    Active Devices : 4
   Working Devices : 4
    Failed Devices : 0
     Spare Devices : 0

            Layout : left-symmetric
        Chunk Size : 512K

Consistency Policy : resync

              Name : localhost:6  (local to host localhost)
              UUID : 7c3d15a2:4066f2c6:742f3e4c:82aae1bb
            Events : 47

    Number   Major   Minor   RaidDevice State
       5       8       97        0      active sync   /dev/sdg1
       4       8       81        1      active sync   /dev/sdf1
       2       8       49        2      active sync   /dev/sdd1
       3       8       65        3      active sync   /dev/sde1

13. Re-add the disks as hot spares.

[root@localhost ~]# mdadm -a /dev/md6 /dev/sd{b,c}1
mdadm: added /dev/sdb1
mdadm: added /dev/sdc1
[root@localhost ~]# mdadm -D /dev/md6
/dev/md6:
           Version : 1.2
     Creation Time : Sun Aug 25 16:34:36 2019
        Raid Level : raid6
        Array Size : 41906176 (39.96 GiB 42.91 GB)
     Used Dev Size : 20953088 (19.98 GiB 21.46 GB)
      Raid Devices : 4
     Total Devices : 6
       Persistence : Superblock is persistent

       Update Time : Sun Aug 25 16:44:01 2019
             State : clean
    Active Devices : 4
   Working Devices : 6
    Failed Devices : 0
     Spare Devices : 2

            Layout : left-symmetric
        Chunk Size : 512K

Consistency Policy : resync

              Name : localhost:6  (local to host localhost)
              UUID : 7c3d15a2:4066f2c6:742f3e4c:82aae1bb
            Events : 49

    Number   Major   Minor   RaidDevice State
       5       8       97        0      active sync   /dev/sdg1
       4       8       81        1      active sync   /dev/sdf1
       2       8       49        2      active sync   /dev/sdd1
       3       8       65        3      active sync   /dev/sde1

       6       8       17        -      spare   /dev/sdb1
       7       8       33        -      spare   /dev/sdc1

RAID10

RAID10 first mirrors the data, then stripes across the mirrors: the RAID1 layer provides the redundant copies, while the RAID0 layer handles striped reads and writes. It needs at least four disks, paired off into RAID1 mirrors that are then combined with RAID0. Storage utilization is as low as RAID1's, only 50%, so half the raw disk capacity is sacrificed; in return it delivers roughly double the throughput of a single disk and survives disk failures, as long as the failed disks are not both in the same RAID1 pair. RAID10 generally outperforms RAID5, but the structure does not expand easily and the scheme is comparatively expensive.

RAID10

Experiment: create a RAID10 array, format it, mount and use it, simulate failures, and re-add hot spares.

1. Add four 20 GB disks, create a partition on each, and set the partition type ID to fd.

[root@localhost ~]# fdisk -l | grep raid
/dev/sdb1            2048    41943039    20970496   fd  Linux raid autodetect
/dev/sdc1            2048    41943039    20970496   fd  Linux raid autodetect
/dev/sdd1            2048    41943039    20970496   fd  Linux raid autodetect
/dev/sde1            2048    41943039    20970496   fd  Linux raid autodetect

2. Create two RAID1 arrays, without hot spares.

[root@localhost ~]# mdadm -C -v /dev/md101 -l1 -n2 /dev/sd{b,c}1
mdadm: Note: this array has metadata at the start and
    may not be suitable as a boot device.  If you plan to
    store '/boot' on this device please ensure that
    your boot-loader understands md/v1.x metadata, or use
    --metadata=0.90
mdadm: size set to 20953088K
Continue creating array? y
mdadm: Fail create md101 when using /sys/module/md_mod/parameters/new_array
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md101 started.
[root@localhost ~]# mdadm -C -v /dev/md102 -l1 -n2 /dev/sd{d,e}1
mdadm: Note: this array has metadata at the start and
    may not be suitable as a boot device.  If you plan to
    store '/boot' on this device please ensure that
    your boot-loader understands md/v1.x metadata, or use
    --metadata=0.90
mdadm: size set to 20953088K
Continue creating array? y
mdadm: Fail create md102 when using /sys/module/md_mod/parameters/new_array
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md102 started.

3. Check the array status in /proc/mdstat.

[root@localhost ~]# cat /proc/mdstat
Personalities : [raid1]
md102 : active raid1 sde1[1] sdd1[0]
      20953088 blocks super 1.2 [2/2] [UU]
      [=========>...........]  resync = 48.4% (10148224/20953088) finish=0.8min speed=200056K/sec

md101 : active raid1 sdc1[1] sdb1[0]
      20953088 blocks super 1.2 [2/2] [UU]
      [=============>.......]  resync = 69.6% (14604672/20953088) finish=0.5min speed=200052K/sec

unused devices: <none>
[root@localhost ~]# cat /proc/mdstat
Personalities : [raid1]
md102 : active raid1 sde1[1] sdd1[0]
      20953088 blocks super 1.2 [2/2] [UU]

md101 : active raid1 sdc1[1] sdb1[0]
      20953088 blocks super 1.2 [2/2] [UU]

unused devices: <none>

4. View the details of the two RAID1 arrays.

[root@localhost ~]# mdadm -D /dev/md101
/dev/md101:
           Version : 1.2
     Creation Time : Sun Aug 25 16:53:00 2019
        Raid Level : raid1
        Array Size : 20953088 (19.98 GiB 21.46 GB)
     Used Dev Size : 20953088 (19.98 GiB 21.46 GB)
      Raid Devices : 2
     Total Devices : 2
       Persistence : Superblock is persistent

       Update Time : Sun Aug 25 16:53:58 2019
             State : clean, resyncing
    Active Devices : 2
   Working Devices : 2
    Failed Devices : 0
     Spare Devices : 0

Consistency Policy : resync

     Resync Status : 62% complete

              Name : localhost:101  (local to host localhost)
              UUID : 80bb4fc5:1a628936:275ba828:17f23330
            Events : 9

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       33        1      active sync   /dev/sdc1
[root@localhost ~]# mdadm -D /dev/md102
/dev/md102:
           Version : 1.2
     Creation Time : Sun Aug 25 16:53:23 2019
        Raid Level : raid1
        Array Size : 20953088 (19.98 GiB 21.46 GB)
     Used Dev Size : 20953088 (19.98 GiB 21.46 GB)
      Raid Devices : 2
     Total Devices : 2
       Persistence : Superblock is persistent

       Update Time : Sun Aug 25 16:54:02 2019
             State : clean, resyncing
    Active Devices : 2
   Working Devices : 2
    Failed Devices : 0
     Spare Devices : 0

Consistency Policy : resync

     Resync Status : 42% complete

              Name : localhost:102  (local to host localhost)
              UUID : 38abac72:74fa8a53:3a21b5e4:01ae64cd
            Events : 6

    Number   Major   Minor   RaidDevice State
       0       8       49        0      active sync   /dev/sdd1
       1       8       65        1      active sync   /dev/sde1

5.創建RAID10

[root@localhost ~]# mdadm -C -v /dev/md10 -l0 -n2 /dev/md10{1,2}
mdadm: chunk size defaults to 512K
mdadm: Fail create md10 when using /sys/module/md_mod/parameters/new_array
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md10 started.
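As an aside not shown in the transcript above: instead of nesting two RAID1 arrays under a RAID0, mdadm also has a native raid10 personality that builds the equivalent array in one step. A hedged sketch, assuming the same four partitions were still free and you run as root:

```shell
# One-step alternative (sketch, not from the run above): mdadm's native
# "raid10" level mirrors and stripes inside a single md device.
# Assumes /dev/sd{b,c,d,e}1 are unused partitions of type fd.
mdadm -C -v /dev/md10 -l10 -n4 /dev/sd{b,c,d,e}1

# The default layout is "near" with 2 copies; verify with:
mdadm -D /dev/md10 | grep -i layout
```

A single-level array is also simpler to manage: one `/proc/mdstat` entry, and failed disks are replaced directly in /dev/md10 rather than in the inner mirrors.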

6. Check the array status in /proc/mdstat.

[root@localhost ~]# cat /proc/mdstat
Personalities : [raid1] [raid0]
md10 : active raid0 md102[1] md101[0]
      41871360 blocks super 1.2 512k chunks

md102 : active raid1 sde1[1] sdd1[0]
      20953088 blocks super 1.2 [2/2] [UU]

md101 : active raid1 sdc1[1] sdb1[0]
      20953088 blocks super 1.2 [2/2] [UU]

unused devices: <none>

7. View the details of the RAID10 array.

[root@localhost ~]# mdadm -D /dev/md10
/dev/md10:
           Version : 1.2
     Creation Time : Sun Aug 25 16:56:08 2019
        Raid Level : raid0
        Array Size : 41871360 (39.93 GiB 42.88 GB)
      Raid Devices : 2
     Total Devices : 2
       Persistence : Superblock is persistent

       Update Time : Sun Aug 25 16:56:08 2019
             State : clean
    Active Devices : 2
   Working Devices : 2
    Failed Devices : 0
     Spare Devices : 0

        Chunk Size : 512K

Consistency Policy : none

              Name : localhost:10  (local to host localhost)
              UUID : 23c6abac:b131a049:db25cac8:686fb045
            Events : 0

    Number   Major   Minor   RaidDevice State
       0       9      101        0      active sync   /dev/md101
       1       9      102        1      active sync   /dev/md102

8. Format the array.

[root@localhost ~]# mkfs.xfs /dev/md10
meta-data=/dev/md10              isize=512    agcount=16, agsize=654208 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0, sparse=0
data     =                       bsize=4096   blocks=10467328, imaxpct=25
         =                       sunit=128    swidth=256 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=5112, version=2
         =                       sectsz=512   sunit=8 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0

9. Mount and use it.

[root@localhost ~]# mkdir /mnt/md10
[root@localhost ~]# mount /dev/md10 /mnt/md10/
[root@localhost ~]# df -hT
Filesystem              Type      Size  Used Avail Use% Mounted on
/dev/mapper/centos-root xfs        17G 1014M   16G   6% /
devtmpfs                devtmpfs  901M     0  901M   0% /dev
tmpfs                   tmpfs     912M     0  912M   0% /dev/shm
tmpfs                   tmpfs     912M  8.7M  903M   1% /run
tmpfs                   tmpfs     912M     0  912M   0% /sys/fs/cgroup
/dev/sda1               xfs      1014M  143M  872M  15% /boot
tmpfs                   tmpfs     183M     0  183M   0% /run/user/0
/dev/md10               xfs        40G   33M   40G   1% /mnt/md10

10. Create test files.

[root@localhost ~]# touch /mnt/md10/test{1..9}.txt
[root@localhost ~]# ls /mnt/md10/
test1.txt  test2.txt  test3.txt  test4.txt  test5.txt  test6.txt  test7.txt  test8.txt  test9.txt

11. Simulate failures: fail one disk in each RAID1 mirror.

[root@localhost ~]# mdadm -f /dev/md101 /dev/sdb1
mdadm: set /dev/sdb1 faulty in /dev/md101
[root@localhost ~]# mdadm -f /dev/md102 /dev/sdd1
mdadm: set /dev/sdd1 faulty in /dev/md102

12. Verify the test files are still accessible.

[root@localhost ~]# ls /mnt/md10/
test1.txt  test2.txt  test3.txt  test4.txt  test5.txt  test6.txt  test7.txt  test8.txt  test9.txt

13. Check the status. Each mirror is degraded, but the RAID0 layer on top still sees two healthy members.

[root@localhost ~]# cat /proc/mdstat
Personalities : [raid1] [raid0]
md10 : active raid0 md102[1] md101[0]
      41871360 blocks super 1.2 512k chunks

md102 : active raid1 sde1[1] sdd1[0](F)
      20953088 blocks super 1.2 [2/1] [_U]

md101 : active raid1 sdc1[1] sdb1[0](F)
      20953088 blocks super 1.2 [2/1] [_U]

unused devices: <none>
[root@localhost ~]# mdadm -D /dev/md101
/dev/md101:
           Version : 1.2
     Creation Time : Sun Aug 25 16:53:00 2019
        Raid Level : raid1
        Array Size : 20953088 (19.98 GiB 21.46 GB)
     Used Dev Size : 20953088 (19.98 GiB 21.46 GB)
      Raid Devices : 2
     Total Devices : 2
       Persistence : Superblock is persistent

       Update Time : Sun Aug 25 17:01:11 2019
             State : clean, degraded
    Active Devices : 1
   Working Devices : 1
    Failed Devices : 1
     Spare Devices : 0

Consistency Policy : resync

              Name : localhost:101  (local to host localhost)
              UUID : 80bb4fc5:1a628936:275ba828:17f23330
            Events : 23

    Number   Major   Minor   RaidDevice State
       -       0        0        0      removed
       1       8       33        1      active sync   /dev/sdc1

       0       8       17        -      faulty   /dev/sdb1
[root@localhost ~]# mdadm -D /dev/md102
/dev/md102:
           Version : 1.2
     Creation Time : Sun Aug 25 16:53:23 2019
        Raid Level : raid1
        Array Size : 20953088 (19.98 GiB 21.46 GB)
     Used Dev Size : 20953088 (19.98 GiB 21.46 GB)
      Raid Devices : 2
     Total Devices : 2
       Persistence : Superblock is persistent

       Update Time : Sun Aug 25 17:00:43 2019
             State : clean, degraded
    Active Devices : 1
   Working Devices : 1
    Failed Devices : 1
     Spare Devices : 0

Consistency Policy : resync

              Name : localhost:102  (local to host localhost)
              UUID : 38abac72:74fa8a53:3a21b5e4:01ae64cd
            Events : 19

    Number   Major   Minor   RaidDevice State
       -       0        0        0      removed
       1       8       65        1      active sync   /dev/sde1

       0       8       49        -      faulty   /dev/sdd1
[root@localhost ~]# mdadm -D /dev/md10
/dev/md10:
           Version : 1.2
     Creation Time : Sun Aug 25 16:56:08 2019
        Raid Level : raid0
        Array Size : 41871360 (39.93 GiB 42.88 GB)
      Raid Devices : 2
     Total Devices : 2
       Persistence : Superblock is persistent

       Update Time : Sun Aug 25 16:56:08 2019
             State : clean
    Active Devices : 2
   Working Devices : 2
    Failed Devices : 0
     Spare Devices : 0

        Chunk Size : 512K

Consistency Policy : none

              Name : localhost:10  (local to host localhost)
              UUID : 23c6abac:b131a049:db25cac8:686fb045
            Events : 0

    Number   Major   Minor   RaidDevice State
       0       9      101        0      active sync   /dev/md101
       1       9      102        1      active sync   /dev/md102

14. Remove the failed disks.

[root@localhost ~]# mdadm -r /dev/md101 /dev/sdb1
mdadm: hot removed /dev/sdb1 from /dev/md101
[root@localhost ~]# mdadm -r /dev/md102 /dev/sdd1
mdadm: hot removed /dev/sdd1 from /dev/md102
[root@localhost ~]# mdadm -D /dev/md101
/dev/md101:
           Version : 1.2
     Creation Time : Sun Aug 25 16:53:00 2019
        Raid Level : raid1
        Array Size : 20953088 (19.98 GiB 21.46 GB)
     Used Dev Size : 20953088 (19.98 GiB 21.46 GB)
      Raid Devices : 2
     Total Devices : 1
       Persistence : Superblock is persistent

       Update Time : Sun Aug 25 17:04:59 2019
             State : clean, degraded
    Active Devices : 1
   Working Devices : 1
    Failed Devices : 0
     Spare Devices : 0

Consistency Policy : resync

              Name : localhost:101  (local to host localhost)
              UUID : 80bb4fc5:1a628936:275ba828:17f23330
            Events : 26

    Number   Major   Minor   RaidDevice State
       -       0        0        0      removed
       1       8       33        1      active sync   /dev/sdc1
[root@localhost ~]# mdadm -D /dev/md102
/dev/md102:
           Version : 1.2
     Creation Time : Sun Aug 25 16:53:23 2019
        Raid Level : raid1
        Array Size : 20953088 (19.98 GiB 21.46 GB)
     Used Dev Size : 20953088 (19.98 GiB 21.46 GB)
      Raid Devices : 2
     Total Devices : 1
       Persistence : Superblock is persistent

       Update Time : Sun Aug 25 17:05:07 2019
             State : clean, degraded
    Active Devices : 1
   Working Devices : 1
    Failed Devices : 0
     Spare Devices : 0

Consistency Policy : resync

              Name : localhost:102  (local to host localhost)
              UUID : 38abac72:74fa8a53:3a21b5e4:01ae64cd
            Events : 20

    Number   Major   Minor   RaidDevice State
       -       0        0        0      removed
       1       8       65        1      active sync   /dev/sde1

15. Re-add the disks as hot spares; they are immediately used to rebuild the mirrors.

[root@localhost ~]# mdadm -a /dev/md101 /dev/sdb1
mdadm: added /dev/sdb1
[root@localhost ~]# mdadm -a /dev/md102 /dev/sdd1
mdadm: added /dev/sdd1

16. Check the status again.

[root@localhost ~]# cat /proc/mdstat
Personalities : [raid1] [raid0]
md10 : active raid0 md102[1] md101[0]
      41871360 blocks super 1.2 512k chunks

md102 : active raid1 sdd1[2] sde1[1]
      20953088 blocks super 1.2 [2/1] [_U]
      [====>................]  recovery = 23.8% (5000704/20953088) finish=1.2min speed=208362K/sec

md101 : active raid1 sdb1[2] sdc1[1]
      20953088 blocks super 1.2 [2/1] [_U]
      [======>..............]  recovery = 32.0% (6712448/20953088) finish=1.1min speed=203407K/sec

unused devices: <none>
[root@localhost ~]# cat /proc/mdstat
Personalities : [raid1] [raid0]
md10 : active raid0 md102[1] md101[0]
      41871360 blocks super 1.2 512k chunks

md102 : active raid1 sdd1[2] sde1[1]
      20953088 blocks super 1.2 [2/2] [UU]

md101 : active raid1 sdb1[2] sdc1[1]
      20953088 blocks super 1.2 [2/2] [UU]

unused devices: <none>
[root@localhost ~]# mdadm -D /dev/md101
/dev/md101:
           Version : 1.2
     Creation Time : Sun Aug 25 16:53:00 2019
        Raid Level : raid1
        Array Size : 20953088 (19.98 GiB 21.46 GB)
     Used Dev Size : 20953088 (19.98 GiB 21.46 GB)
      Raid Devices : 2
     Total Devices : 2
       Persistence : Superblock is persistent

       Update Time : Sun Aug 25 17:07:28 2019
             State : clean
    Active Devices : 2
   Working Devices : 2
    Failed Devices : 0
     Spare Devices : 0

Consistency Policy : resync

              Name : localhost:101  (local to host localhost)
              UUID : 80bb4fc5:1a628936:275ba828:17f23330
            Events : 45

    Number   Major   Minor   RaidDevice State
       2       8       17        0      active sync   /dev/sdb1
       1       8       33        1      active sync   /dev/sdc1
[root@localhost ~]# mdadm -D /dev/md102
/dev/md102:
           Version : 1.2
     Creation Time : Sun Aug 25 16:53:23 2019
        Raid Level : raid1
        Array Size : 20953088 (19.98 GiB 21.46 GB)
     Used Dev Size : 20953088 (19.98 GiB 21.46 GB)
      Raid Devices : 2
     Total Devices : 2
       Persistence : Superblock is persistent

       Update Time : Sun Aug 25 17:07:36 2019
             State : clean
    Active Devices : 2
   Working Devices : 2
    Failed Devices : 0
     Spare Devices : 0

Consistency Policy : resync

              Name : localhost:102  (local to host localhost)
              UUID : 38abac72:74fa8a53:3a21b5e4:01ae64cd
            Events : 39

    Number   Major   Minor   RaidDevice State
       2       8       49        0      active sync   /dev/sdd1
       1       8       65        1      active sync   /dev/sde1
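One step the walkthrough skips: hand-assembled software arrays and their mount points are not automatically persisted across reboots. A hedged sketch of recording them, assuming a CentOS 7 layout (the config file path and fstab entry are illustrative; adjust for your system):

```shell
# Record the current arrays so mdadm can assemble them at boot
# (CentOS 7 path assumed; some distributions use /etc/mdadm/mdadm.conf).
mdadm --detail --scan >> /etc/mdadm.conf

# Optionally mount the filesystem at boot as well, by UUID:
blkid /dev/md10          # note the filesystem UUID it prints
# then add a line like the following to /etc/fstab, substituting the UUID:
#   UUID=<uuid-from-blkid>  /mnt/md10  xfs  defaults  0 0
```

Without the mdadm.conf entry the arrays may still be auto-assembled from their superblocks, but possibly under different device names (e.g. /dev/md127), which would break a device-path fstab entry.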

Comparison of common RAID levels

| Level  | Disks              | Capacity / utilization | Read perf | Write perf | Redundancy                                    |
|--------|--------------------|------------------------|-----------|------------|-----------------------------------------------|
| RAID0  | N                  | sum of N disks         | N×        | N×         | none; one failure loses all data              |
| RAID1  | N (even)           | 50%                    | ↑         | ↓          | writes both devices; tolerates one failure    |
| RAID5  | N ≥ 3              | (N-1)/N                | ↑↑        | ↓          | computed parity; tolerates one failure        |
| RAID6  | N ≥ 4              | (N-2)/N                | ↑↑        | ↓↓         | double parity; tolerates two failures         |
| RAID10 | N (even, N ≥ 4)    | 50%                    | (N/2)×    | (N/2)×     | tolerates one failed disk per mirror pair     |
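The utilization column can be checked with a little arithmetic. A minimal sketch, assuming N = 4 disks of S = 20 GB each (the sizes used in the experiments above):

```shell
# Usable capacity in GB for each level, given N disks of S GB each.
N=4; S=20
raid0=$(( N * S ))          # stripe: sum of all disks
raid1=$(( S ))              # mirror: one disk's worth
raid5=$(( (N - 1) * S ))    # one disk's worth of parity
raid6=$(( (N - 2) * S ))    # two disks' worth of parity
raid10=$(( N / 2 * S ))     # half the disks hold mirror copies
echo "RAID0=$raid0 RAID1=$raid1 RAID5=$raid5 RAID6=$raid6 RAID10=$raid10"
# -> RAID0=80 RAID1=20 RAID5=60 RAID6=40 RAID10=40
```

The RAID6 and RAID10 figures (40 GB) match the ~39.96 GiB array sizes reported by `mdadm -D` in the walkthroughs above.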

A few words

The operations in this article are simple, but the repeated status checks take up most of the space. Focus on the key points: every level follows the same pattern, and the steps repeat.

