How to Add a New Hard Drive to a Linux Software RAID (CentOS & Fedora)

Getting Started

A while back I set up a 1TB hard drive for backing up my primary file server. That 1TB drive was set up as a software RAID, a RAID 1 mirror to be exact. However, I opted at the time to add only that one drive to the RAID, mainly because one was all I had. I finally got around to getting a second 1TB drive and needed to add it as another member of the RAID. These are the steps I used to accomplish that.

Preexisting RAID

Now before we get started, here's what the RAID setup looks like. NOTE: Throughout this article I'll be making use of the mdadm command, the primary tool for managing software RAID under Linux. Also notice that my RAID is identified as /dev/md0; software RAID devices are typically named /dev/mdX, where X is a number.

# Existing RAID setup
 
% mdadm --detail /dev/md0
/dev/md0:
        Version : 0.90
  Creation Time : Wed Dec 16 22:55:51 2009
     Raid Level : raid1
     Array Size : 976759936 (931.51 GiB 1000.20 GB)
  Used Dev Size : 976759936 (931.51 GiB 1000.20 GB)
   Raid Devices : 1
  Total Devices : 1
Preferred Minor : 0
    Persistence : Superblock is persistent
 
    Update Time : Sun Feb 19 04:22:02 2012
          State : clean
 Active Devices : 1
Working Devices : 1
 Failed Devices : 0
  Spare Devices : 0
 
           UUID : 1f1b36fd:ce3d589e:6a89fc71:2e8f3e64
         Events : 0.222
 
    Number   Major   Minor   RaidDevice State
       0       8       33        0      active sync   /dev/sdc1

Nothing amazing here. We have one device, the RAID is clean, and the sole RAID member is /dev/sdc1.
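
The same information shows up in compressed form in /proc/mdstat, where the [n/m] field gives configured vs. active devices and the [U] flags show per-device health. A minimal sketch of pulling those fields out with awk, run here against a pasted sample (matching the single-member state above) rather than the live file:

```shell
# Sample of /proc/mdstat for the single-member mirror above; on a live
# system you would read the real file instead:
#   awk '/blocks/ {print $3, $4}' /proc/mdstat
mdstat_sample='md0 : active raid1 sdc1[0]
      976759936 blocks [1/1] [U]'

# $3 = [configured/active] device counts, $4 = per-device status flags
printf '%s\n' "$mdstat_sample" | awk '/blocks/ {print $3, $4}'
```

This prints `[1/1] [U]`: one device configured, one active, and that device is up.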

Installing the 2nd HDD (Hard Disk Drive)

The next step was to power down the system, install the 2nd HDD, and reboot. Once the system came back up I confirmed that the new drive was detected cleanly by the Linux kernel.

# visually inspect the dmesg log
 
% dmesg
...
...
 
### drive on ata5 (the newly added HDD; it becomes /dev/sdb below)
 
ata5: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
ata5.00: ATA-8: Hitachi HDS721010CLA332, JP4OA3MA, max UDMA/133
ata5.00: 1953525168 sectors, multi 0: LBA48 NCQ (depth 31/32)
ata5.00: configured for UDMA/133
 
...
...
 
### drive on ata6 (the existing HDD; it is /dev/sdc below)
 
ata6: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
ata6.00: ATA-8: Hitachi HDT721010SLA360, ST6OA31B, max UDMA/133
ata6.00: 1953525168 sectors, multi 0: LBA48 NCQ (depth 31/32)
ata6.00: configured for UDMA/133

Good, so the hardware was detected by the kernel. Now let's see whether the kernel assigned a device handle to the newly installed HDD.

# check for /dev/sdX device handles
 
% dmesg
...
...
 
### drive #1 (/dev/sdc)
 
  Vendor: ATA       Model: Hitachi HDT72101  Rev: ST6O
  Type:   Direct-Access                      ANSI SCSI revision: 05
SCSI device sdc: 1953525168 512-byte hdwr sectors (1000205 MB)
sdc: Write Protect is off
sdc: Mode Sense: 00 3a 00 00
SCSI device sdc: drive cache: write back
SCSI device sdc: 1953525168 512-byte hdwr sectors (1000205 MB)
sdc: Write Protect is off
sdc: Mode Sense: 00 3a 00 00
SCSI device sdc: drive cache: write back
 sdc: sdc1
sd 5:0:0:0: Attached scsi disk sdc
 
...
...
 
### drive #2 (/dev/sdb) <--- newly added!!
 
  Vendor: ATA       Model: Hitachi HDS72101  Rev: JP4O
  Type:   Direct-Access                      ANSI SCSI revision: 05
SCSI device sdb: 1953525168 512-byte hdwr sectors (1000205 MB)
sdb: Write Protect is off
sdb: Mode Sense: 00 3a 00 00
SCSI device sdb: drive cache: write back
SCSI device sdb: 1953525168 512-byte hdwr sectors (1000205 MB)
sdb: Write Protect is off
sdb: Mode Sense: 00 3a 00 00
SCSI device sdb: drive cache: write back
 sdb: unknown partition table
sd 4:0:0:0: Attached scsi disk sdb

Great, so the kernel has assigned the device handle /dev/sdb to our new HDD. Now let's partition it and add it to the software RAID.
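
If you don't want to eyeball the whole dmesg buffer, the "Attached scsi disk" lines alone tell you which handles the kernel assigned. A small sketch, run here against the two relevant lines copied from the output above (on a live box you would pipe dmesg itself):

```shell
# The two attachment lines copied from the dmesg output above
dmesg_sample='sd 5:0:0:0: Attached scsi disk sdc
sd 4:0:0:0: Attached scsi disk sdb'

# The last field of each matching line is the device name; live version:
#   dmesg | awk '/Attached scsi disk/ {print $NF}'
printf '%s\n' "$dmesg_sample" | awk '/Attached scsi disk/ {print $NF}'
```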

Partitioning

For this next step we're going to use fdisk to create a single primary partition on the 2nd HDD. The partition type needs to be Linux raid autodetect (hex code fd).

# fdisk -l output
 
% fdisk -l
...
...
Disk /dev/sdb: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
 
Disk /dev/sdb doesn't contain a valid partition table
 
Disk /dev/sdc: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
 
   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1               1      121601   976760001   fd  Linux raid autodetect
 
Disk /dev/md0: 1000.2 GB, 1000202174464 bytes
2 heads, 4 sectors/track, 244189984 cylinders
Units = cylinders of 8 * 512 = 4096 bytes
 
Disk /dev/md0 doesn't contain a valid partition table

This is good confirmation that our disk, /dev/sdb, is being detected correctly; we can also see that it doesn't have any partitions on it yet. So let's go ahead and create one.

# fdisk /dev/sdb (add a single partition for RAID)
 
% fdisk /dev/sdb
...
...
Command (m for help):

At the prompt, type n and hit return. This starts the process of creating a new partition.

# make a new partition
 
Command (m for help): n

We're going to create a primary partition, number 1. From here on we'll go with the default selections.

# partition info
...
...
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-121601, default 1): 
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-121601, default 121601): 
Using default value 121601

Let’s confirm by printing out the newly created partition’s info.

# check out the new partition
 
Command (m for help): p
 
Disk /dev/sdb: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
 
   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1      121601   976760001   83  Linux

Now we need to change the partition's type (again, it needs to be Linux raid autodetect) and re-confirm that it's correct using the p command.

# change the type to RAID
 
Command (m for help): t
Selected partition 1
Hex code (type L to list codes): fd
Changed system type of partition 1 to fd (Linux raid autodetect)
 
# check out the partition again
 
Command (m for help): p
 
Disk /dev/sdb: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
 
   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1      121601   976760001   fd  Linux raid autodetect

Looks good, so let's save it and make it stick.

# save it
 
Command (m for help): w
The partition table has been altered!
 
Calling ioctl() to re-read partition table.
Syncing disks.

Now when we run the fdisk -l command we should see the following.

# check the results again with fdisk -l
 
% fdisk -l
...
...
Disk /dev/sdb: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
 
   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1      121601   976760001   fd  Linux raid autodetect
...
...
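
As an aside, the whole interactive fdisk session can be skipped by cloning the existing member's partition table with sfdisk; the classic one-liner is sfdisk -d /dev/sdc | sfdisk /dev/sdb. The sketch below only demonstrates what that dump format looks like, using a hand-written sample (the start/size values are illustrative assumptions, not taken from this system), with the device names retargeted at the new disk:

```shell
# Hand-written sample of `sfdisk -d /dev/sdc` output; the start/size
# values here are illustrative assumptions, not from the real disk
dump='# partition table of /dev/sdc
unit: sectors

/dev/sdc1 : start=63, size=1953520002, Id=fd'

# Show the dump retargeted at the new disk. On a live system
# (double-check source and target first!):
#   sfdisk -d /dev/sdc | sfdisk /dev/sdb
printf '%s\n' "$dump" | sed 's|/dev/sdc|/dev/sdb|g'
```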

Great, so now we need to add the drive to the RAID array, /dev/md0.

Adding the Drive to the RAID Array

This step is a little anti-climactic.

# add the new drive (/dev/sdb1) to RAID /dev/md0
 
% mdadm --add /dev/md0 /dev/sdb1
mdadm: added /dev/sdb1

The disk's partition, /dev/sdb1, is now part of the RAID array! Now let's check on the array again.

# check on the add (/proc/mdstat)
 
% cat /proc/mdstat
Personalities : [raid1] 
md0 : active raid1 sdb1[1](S) sdc1[0]
      976759936 blocks [1/1] [U]
 
unused devices: <none>
 
# check on the add (mdadm --detail)
 
% mdadm --detail /dev/md0
/dev/md0:
        Version : 0.90
  Creation Time : Wed Dec 16 22:55:51 2009
     Raid Level : raid1
     Array Size : 976759936 (931.51 GiB 1000.20 GB)
  Used Dev Size : 976759936 (931.51 GiB 1000.20 GB)
   Raid Devices : 1
  Total Devices : 2
Preferred Minor : 0
    Persistence : Superblock is persistent
 
    Update Time : Sun Feb 19 04:22:02 2012
          State : clean
 Active Devices : 1
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 1
 
           UUID : 1f1b36fd:ce3d589e:6a89fc71:2e8f3e64
         Events : 0.222
 
    Number   Major   Minor   RaidDevice State
       0       8       33        0      active sync   /dev/sdc1
 
       1       8       17        -      spare   /dev/sdb1

So now we've added the drive to the RAID array. But hold on: the drive is showing up as a spare, which is not really what I had in mind. I wanted it to be an active member, so there's one more thing to do. We need to tell the RAID array to actively use the newest member.

# telling the RAID to grow
 
% mdadm --grow /dev/md0 --raid-devices=2

Now if we re-check the RAID, things should look better.

# checking the changes
 
% mdadm --detail /dev/md0 
/dev/md0:
        Version : 0.90
  Creation Time : Wed Dec 16 22:55:51 2009
     Raid Level : raid1
     Array Size : 976759936 (931.51 GiB 1000.20 GB)
  Used Dev Size : 976759936 (931.51 GiB 1000.20 GB)
   Raid Devices : 2
  Total Devices : 2
Preferred Minor : 0
    Persistence : Superblock is persistent
 
    Update Time : Wed Feb 22 21:30:56 2012
          State : clean, degraded, recovering
 Active Devices : 1
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 1
 
 Rebuild Status : 0% complete
 
           UUID : 1f1b36fd:ce3d589e:6a89fc71:2e8f3e64
         Events : 0.224
 
    Number   Major   Minor   RaidDevice State
       0       8       33        0      active sync   /dev/sdc1
       2       8       17        1      spare rebuilding   /dev/sdb1

Don't get too excited about the new disk still showing up as a spare in these messages. This is how the software RAID tracks a disk until it's had a chance to be completely synchronized with the rest of the array.
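
One quick sanity check that the --grow took effect is that Raid Devices now reads 2 instead of 1. A sketch of pulling that field out of the mdadm --detail text, run here against two lines copied from the output above:

```shell
# Two lines copied from the mdadm --detail output above
detail_sample='   Raid Devices : 2
  Total Devices : 2'

# Split on " : " and take the value; live version:
#   mdadm --detail /dev/md0 | awk -F' : ' '/Raid Devices/ {print $2; exit}'
raid_devices=$(printf '%s\n' "$detail_sample" | awk -F' : ' '/Raid Devices/ {print $2; exit}')
echo "$raid_devices"
```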

Watching the Sync

You can use the following commands to keep an eye on the sync as it runs.

  • cat /proc/mdstat
  • mdadm --detail /dev/md0
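
For a self-refreshing view, either command can be wrapped in watch (e.g. watch -n 5 cat /proc/mdstat). And if you only want the percentage, the recovery line is easy to pick apart; this sketch runs against a sample progress line of the kind /proc/mdstat prints during a rebuild:

```shell
# A sample recovery line as /proc/mdstat prints it during a rebuild
line='      [>....................]  recovery =  2.8% (27664512/976759936) finish=136.2min speed=116122K/sec'

# The 4th whitespace-separated field is the percentage; live version:
#   awk '/recovery/ {print $4}' /proc/mdstat
printf '%s\n' "$line" | awk '{print $4}'
```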
# watch the rebuild
 
# METHOD #1
 
% cat /proc/mdstat
Personalities : [raid1] 
md0 : active raid1 sdb1[2] sdc1[0]
      976759936 blocks [2/1] [U_]
      [>....................]  recovery =  2.8% (27664512/976759936) finish=136.2min speed=116122K/sec
 
unused devices: <none>
 
# METHOD #2
% mdadm --detail /dev/md0
/dev/md0:
        Version : 0.90
  Creation Time : Wed Dec 16 22:55:51 2009
     Raid Level : raid1
     Array Size : 976759936 (931.51 GiB 1000.20 GB)
  Used Dev Size : 976759936 (931.51 GiB 1000.20 GB)
   Raid Devices : 2
  Total Devices : 2
Preferred Minor : 0
    Persistence : Superblock is persistent
 
    Update Time : Wed Feb 22 21:30:56 2012
          State : clean, degraded, recovering
 Active Devices : 1
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 1
 
 Rebuild Status : 54% complete
 
           UUID : 1f1b36fd:ce3d589e:6a89fc71:2e8f3e64
         Events : 0.224
 
    Number   Major   Minor   RaidDevice State
       0       8       33        0      active sync   /dev/sdc1
       2       8       17        1      spare rebuilding   /dev/sdb1

Done?

Those same two commands will let you know when the sync has completed.

# are we done?
 
# METHOD #1
 
% cat /proc/mdstat 
Personalities : [raid1] 
md0 : active raid1 sdb1[1] sdc1[0]
      976759936 blocks [2/2] [UU]
 
unused devices: <none>
 
# METHOD #2
 
% mdadm --detail /dev/md0
/dev/md0:
        Version : 0.90
  Creation Time : Wed Dec 16 22:55:51 2009
     Raid Level : raid1
     Array Size : 976759936 (931.51 GiB 1000.20 GB)
  Used Dev Size : 976759936 (931.51 GiB 1000.20 GB)
   Raid Devices : 2
  Total Devices : 2
Preferred Minor : 0
    Persistence : Superblock is persistent
 
    Update Time : Thu Feb 23 00:34:06 2012
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0
 
           UUID : 1f1b36fd:ce3d589e:6a89fc71:2e8f3e64
         Events : 0.226
 
    Number   Major   Minor   RaidDevice State
       0       8       33        0      active sync   /dev/sdc1
       1       8       17        1      active sync   /dev/sdb1
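
One bit of housekeeping worth doing once the sync finishes: record the array in /etc/mdadm.conf so it assembles consistently by UUID at boot, especially since the array geometry just changed. mdadm --detail --scan prints a ready-made ARRAY line; the one below is a sample constructed from the values above, since the exact fields vary by mdadm version:

```shell
# Sample ARRAY line as `mdadm --detail --scan` might print it for this
# array (constructed from the values above; fields vary by mdadm version)
scan='ARRAY /dev/md0 level=raid1 num-devices=2 UUID=1f1b36fd:ce3d589e:6a89fc71:2e8f3e64'

# On the live system you would append the real output:
#   mdadm --detail --scan >> /etc/mdadm.conf
printf '%s\n' "$scan"
```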
