
Posts from the ‘Qnap’ Category

5 Jun

How To Troubleshoot a Broken RAID Volume On a QNAP Storage Device

Something like four weeks ago I had a major issue with my QNAP TS-459 Pro storage device. I did a simple firmware upgrade and whoops, my RAID0 volume was gone!

So I was left with three stand-alone disks, as shown in the screenshot above. The two WDC VelociRaptor disks were supposed to form a single RAID0 volume; well, they did right up until the firmware upgrade…

As an old saying goes: if your data is important, back it up once; if your data is critical, back it up twice. My data is important and I had everything rsync’ed to another QNAP TS-639 Pro storage device, but it is still a pain in the a***. By the way, did you know that with the latest beta firmware your QNAP device can copy your data to a SaaS (Storage as a Service) provider in the Cloud? Cool, isn’t it :)
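
For reference, a minimal sketch of the kind of rsync copy I rely on; the host name and destination path are hypothetical, and it assumes rsync over SSH is enabled on the target NAS:

[~] # rsync -av --delete /share/MD0_DATA/ admin@ts639:/share/MD0_DATA/ts459-backup/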

It took me some time and many trials and errors to recover my broken RAID0 volume. Fortunately I had a backup and a lot of spare time, so I could play around with the device. The first thing I tried was a restore of the latest known-good backup of my QNAP configuration. The restore process went flawlessly, but upon reboot I had the same problem: my RAID0 volume was still gone.

I SSH’ed into the QNAP device and ran some mdadm commands, as shown in the screenshot below.

mdadm -E /dev/md0 confirmed the issue: no RAID0 volume, even though I had restored the QNAP’s configuration settings.

Whilst mdadm -E /dev/sda3 showed me that a superblock was available for /dev/sda3, that wasn’t the case for /dev/sdb3. That’s not good at all :(

/proc/mdstat confirmed that the restore had not helped: there was no /dev/md0 listed there…

Well, the restore was partially helpful. Looking at the screenshots above, in the logical volumes panel you can see that a striping volume containing disks 1 and 2 was declared but not active. And in the second screenshot, the striping disk volume shows as unmounted as a result.

I tried to re-build the striping configuration with the command mdadm --build /dev/md0 -c 64 -l 0 -n 2 /dev/sda3 /dev/sdb3, and /dev/md0 was successfully appended to /proc/mdstat.

 

I tried to mount /dev/md0 but that did not work out. I checked /etc/mtab for /dev/md0 but could not find it; the volume had not been mounted as I would have expected… At this stage I decided that a reboot was necessary…
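
For the record, the manual check looked roughly like this (the /share/MD0_DATA mount point is the QNAP default used elsewhere in these posts):

[~] # mount /dev/md0 /share/MD0_DATA -t ext3
[~] # grep md0 /etc/mtab
** if grep prints nothing, the volume is not mounted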

The device came back up and I checked the Volume Management panel for the status of the striping volume: it was still not active, but this time the File System column showed EXT3. The Check Now button might help recover the striping volume to a healthy state, so let’s try that. And indeed it was a success, as shown in the two screenshots below: the Check Now button fixed the /dev/md0 entry in /etc/mtab and the status was now active. Hurray!

Unfortunately the happiness was short-lived, because as soon as the QNAP device rebooted, the striping volume configuration settings were gone again. In fact I could see that the superblock (the area on each disk of a RAID set where the parameters defining a software RAID volume are stored) was not persistent; the parameters had never been written to a superblock, as shown in the screenshot below.

I had to face it: I would not be able to recover my striping disk volume, so I decided it was time to re-create it from scratch and copy my data back. I cleared any RAID volume settings from the device and, while at it, decided to evaluate the sweet spot of the chunk size for a RAID0 volume.
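
For the curious, a minimal sketch of what re-creating the RAID0 volume with an explicit chunk size looks like from the command line; the 256K chunk is only one candidate value to benchmark, and on a QNAP the web UI is usually the safer way to do this:

[~] # mdadm --create /dev/md0 --level=0 --raid-devices=2 --chunk=256 /dev/sda3 /dev/sdb3
[~] # mkfs.ext3 /dev/md0
[~] # mount /dev/md0 /share/MD0_DATA -t ext3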

That reminds me that even if technology is getting better and better, it is still not error-free and shit happens. Fortunately no data was lost for good, and I could get my home lab back up with an even faster RAID0 volume than before the crash, thanks to the new chunk size.

Obtained from this link

5 Jun

QNAP useful commands, with examples

more /proc/mdstat

fdisk -l

mdadm -E /dev/sda3

mdadm -E /dev/sdb3

mdadm -E /dev/sdc3

mdadm -E /dev/sdd3

For 8 disks, continue with mdadm -E /dev/sde3, mdadm -E /dev/sdf3, mdadm -E /dev/sdg3, mdadm -E /dev/sdh3, and so on if there are more.
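
A small loop saves typing each command by hand (a sketch; it assumes the data partition is number 3 on every member disk, which is the standard QNAP layout shown in the examples below):

[~] # for d in /dev/sd[a-h]3; do echo "== $d =="; mdadm -E $d; done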

mdadm -D /dev/md0

cat /etc/raidtab

Examples:

With problems: the 4-disk RAID 5 does not show up

[~] # more /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath]
md4 : active raid1 sdd2[2](S) sdc2[1] sda2[0]
530048 blocks [2/2] [UU]

md13 : active raid1 sda4[0] sdd4[2] sdc4[1]
458880 blocks [4/3] [UUU_]
bitmap: 40/57 pages [160KB], 4KB chunk

md9 : active raid1 sda1[0] sdd1[2] sdc1[1]
530048 blocks [4/3] [UUU_]
bitmap: 39/65 pages [156KB], 4KB chunk

unused devices:
[~] #

[~] # fdisk -l

Disk /dev/sdd: 2000.3 GB, 2000398934016 bytes
255 heads, 63 sectors/track, 243201 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System
/dev/sdd1 1 66 530125 83 Linux
/dev/sdd2 67 132 530142 83 Linux
/dev/sdd3 133 243138 1951945693 83 Linux
/dev/sdd4 243139 243200 498012 83 Linux

Disk /dev/sdc: 2000.3 GB, 2000398934016 bytes
255 heads, 63 sectors/track, 243201 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System
/dev/sdc1 1 66 530125 83 Linux
/dev/sdc2 67 132 530142 83 Linux
/dev/sdc3 133 243138 1951945693 83 Linux
/dev/sdc4 243139 243200 498012 83 Linux
You must set cylinders.
You can do this from the extra functions menu.

Disk /dev/sdya: 0 MB, 0 bytes
255 heads, 63 sectors/track, 0 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System
/dev/sdya1 1 267350 2147483647+ ee EFI GPT
Partition 1 has different physical/logical beginnings (non-Linux?):
phys=(0, 0, 1) logical=(0, 0, 2)
Partition 1 has different physical/logical endings:
phys=(1023, 254, 63) logical=(267349, 89, 4)
You must set cylinders.
You can do this from the extra functions menu.

Disk /dev/sda: 0 MB, 0 bytes
255 heads, 63 sectors/track, 0 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System
/dev/sda1 1 267350 2147483647+ ee EFI GPT
Partition 1 has different physical/logical beginnings (non-Linux?):
phys=(0, 0, 1) logical=(0, 0, 2)
Partition 1 has different physical/logical endings:
phys=(1023, 254, 63) logical=(267349, 89, 4)

Disk /dev/sda4: 469 MB, 469893120 bytes
2 heads, 4 sectors/track, 114720 cylinders
Units = cylinders of 8 * 512 = 4096 bytes

Disk /dev/sda4 doesn’t contain a valid partition table

Disk /dev/sdx: 515 MB, 515899392 bytes
8 heads, 32 sectors/track, 3936 cylinders
Units = cylinders of 256 * 512 = 131072 bytes

Device Boot Start End Blocks Id System
/dev/sdx1 1 17 2160 83 Linux
/dev/sdx2 18 1910 242304 83 Linux
/dev/sdx3 1911 3803 242304 83 Linux
/dev/sdx4 3804 3936 17024 5 Extended
/dev/sdx5 3804 3868 8304 83 Linux
/dev/sdx6 3869 3936 8688 83 Linux

Disk /dev/md9: 542 MB, 542769152 bytes
2 heads, 4 sectors/track, 132512 cylinders
Units = cylinders of 8 * 512 = 4096 bytes

Disk /dev/md9 doesn’t contain a valid partition table

Disk /dev/md4: 542 MB, 542769152 bytes
2 heads, 4 sectors/track, 132512 cylinders
Units = cylinders of 8 * 512 = 4096 bytes

Disk /dev/md4 doesn’t contain a valid partition table
[~] #

[~] # mdadm -E /dev/sda3
/dev/sda3:
Magic : a92b4efc
Version : 00.90.00
UUID : d46b73cf:b63f7efd:94338b81:55ba1b4a
Creation Time : Thu Dec 23 13:21:36 2010
Raid Level : raid5
Used Dev Size : 1951945600 (1861.52 GiB 1998.79 GB)
Array Size : 5855836800 (5584.56 GiB 5996.38 GB)
Raid Devices : 4
Total Devices : 4
Preferred Minor : 0

Update Time : Wed Jun 5 21:07:41 2013
State : active
Active Devices : 3
Working Devices : 4
Failed Devices : 0
Spare Devices : 1
Checksum : 31506936 – correct
Events : 0.7364756

Layout : left-symmetric
Chunk Size : 64K

Number Major Minor RaidDevice State
this 4 8 3 4 spare /dev/sda3

0 0 0 0 0 removed
1 1 8 19 1 active sync /dev/sdb3
2 2 8 35 2 active sync /dev/sdc3
3 3 8 51 3 active sync /dev/sdd3
4 4 8 3 4 spare /dev/sda3
[~] #

[~] # mdadm -E /dev/sdb3
mdadm: cannot open /dev/sdb3: No such device or address
[~] #

[~] # mdadm -E /dev/sdc3
/dev/sdc3:
Magic : a92b4efc
Version : 00.90.00
UUID : d46b73cf:b63f7efd:94338b81:55ba1b4a
Creation Time : Thu Dec 23 13:21:36 2010
Raid Level : raid5
Used Dev Size : 1951945600 (1861.52 GiB 1998.79 GB)
Array Size : 5855836800 (5584.56 GiB 5996.38 GB)
Raid Devices : 4
Total Devices : 4
Preferred Minor : 0

Update Time : Wed Jun 5 21:07:41 2013
State : clean
Active Devices : 3
Working Devices : 4
Failed Devices : 0
Spare Devices : 1
Checksum : 31506959 – correct
Events : 0.7364756

Layout : left-symmetric
Chunk Size : 64K

Number Major Minor RaidDevice State
this 2 8 35 2 active sync /dev/sdc3

0 0 0 0 0 removed
1 1 8 19 1 active sync /dev/sdb3
2 2 8 35 2 active sync /dev/sdc3
3 3 8 51 3 active sync /dev/sdd3
4 4 8 3 4 spare /dev/sda3
[~] #

[~] # mdadm -E /dev/sdd3
/dev/sdd3:
Magic : a92b4efc
Version : 00.90.00
UUID : d46b73cf:b63f7efd:94338b81:55ba1b4a
Creation Time : Thu Dec 23 13:21:36 2010
Raid Level : raid5
Used Dev Size : 1951945600 (1861.52 GiB 1998.79 GB)
Array Size : 5855836800 (5584.56 GiB 5996.38 GB)
Raid Devices : 4
Total Devices : 4
Preferred Minor : 0

Update Time : Wed Jun 5 21:07:41 2013
State : active
Active Devices : 3
Working Devices : 4
Failed Devices : 0
Spare Devices : 1
Checksum : 3150696a – correct
Events : 0.7364756

Layout : left-symmetric
Chunk Size : 64K

Number Major Minor RaidDevice State
this 3 8 51 3 active sync /dev/sdd3

0 0 0 0 0 removed
1 1 8 19 1 active sync /dev/sdb3
2 2 8 35 2 active sync /dev/sdc3
3 3 8 51 3 active sync /dev/sdd3
4 4 8 3 4 spare /dev/sda3
[~] #

[~] # mdadm -D /dev/md0
mdadm: md device /dev/md0 does not appear to be active.
[~] #

[~] # cat /etc/raidtab
raiddev /dev/md0
raid-level 5
nr-raid-disks 4
nr-spare-disks 0
chunk-size 4
persistent-superblock 1
device /dev/sda3
raid-disk 0
device /dev/sdb3
raid-disk 1
device /dev/sdc3
raid-disk 2
device /dev/sdd3
raid-disk 3
[~] #

A 4-disk RAID 5 in the middle of rebuilding

[~] # more /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath]
md0 : active raid5 sda3[4] sdd3[3] sdc3[2] sdb3[1]
8786092800 blocks level 5, 64k chunk, algorithm 2 [4/3] [_UUU]
[=====>……………] recovery = 26.1% (766015884/2928697600) finish=1054.1min speed=34193K/sec

md4 : active raid1 sda2[2](S) sdd2[0] sdc2[3](S) sdb2[1]
530048 blocks [2/2] [UU]

md13 : active raid1 sda4[0] sdc4[3] sdd4[2] sdb4[1]
458880 blocks [4/4] [UUUU]
bitmap: 0/57 pages [0KB], 4KB chunk

md9 : active raid1 sda1[0] sdd1[3] sdc1[2] sdb1[1]
530048 blocks [4/4] [UUUU]
bitmap: 1/65 pages [4KB], 4KB chunk

unused devices:
[~] #

[~] # fdisk -l
You must set cylinders.
You can do this from the extra functions menu.

Disk /dev/sdd: 0 MB, 0 bytes
255 heads, 63 sectors/track, 0 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System
/dev/sdd1 1 267350 2147483647+ ee EFI GPT
Partition 1 has different physical/logical beginnings (non-Linux?):
phys=(0, 0, 1) logical=(0, 0, 2)
Partition 1 has different physical/logical endings:
phys=(1023, 254, 63) logical=(267349, 89, 4)
You must set cylinders.
You can do this from the extra functions menu.

Disk /dev/sdc: 0 MB, 0 bytes
255 heads, 63 sectors/track, 0 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System
/dev/sdc1 1 267350 2147483647+ ee EFI GPT
Partition 1 has different physical/logical beginnings (non-Linux?):
phys=(0, 0, 1) logical=(0, 0, 2)
Partition 1 has different physical/logical endings:
phys=(1023, 254, 63) logical=(267349, 89, 4)
You must set cylinders.
You can do this from the extra functions menu.

Disk /dev/sdb: 0 MB, 0 bytes
255 heads, 63 sectors/track, 0 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System
/dev/sdb1 1 267350 2147483647+ ee EFI GPT
Partition 1 has different physical/logical beginnings (non-Linux?):
phys=(0, 0, 1) logical=(0, 0, 2)
Partition 1 has different physical/logical endings:
phys=(1023, 254, 63) logical=(267349, 89, 4)

Disk /dev/sdx: 515 MB, 515899392 bytes
8 heads, 32 sectors/track, 3936 cylinders
Units = cylinders of 256 * 512 = 131072 bytes

Device Boot Start End Blocks Id System
/dev/sdx1 1 17 2160 83 Linux
/dev/sdx2 18 1910 242304 83 Linux
/dev/sdx3 1911 3803 242304 83 Linux
/dev/sdx4 3804 3936 17024 5 Extended
/dev/sdx5 3804 3868 8304 83 Linux
/dev/sdx6 3869 3936 8688 83 Linux

Disk /dev/md9: 542 MB, 542769152 bytes
2 heads, 4 sectors/track, 132512 cylinders
Units = cylinders of 8 * 512 = 4096 bytes

Disk /dev/md9 doesn’t contain a valid partition table

Disk /dev/md4: 542 MB, 542769152 bytes
2 heads, 4 sectors/track, 132512 cylinders
Units = cylinders of 8 * 512 = 4096 bytes

Disk /dev/md4 doesn’t contain a valid partition table

Disk /dev/md0: 0 MB, 0 bytes
2 heads, 4 sectors/track, 0 cylinders
Units = cylinders of 8 * 512 = 4096 bytes

Disk /dev/md0 doesn’t contain a valid partition table
You must set cylinders.
You can do this from the extra functions menu.

Disk /dev/sda: 0 MB, 0 bytes
255 heads, 63 sectors/track, 0 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System
/dev/sda1 1 267350 2147483647+ ee EFI GPT
Partition 1 has different physical/logical beginnings (non-Linux?):
phys=(0, 0, 1) logical=(0, 0, 2)
Partition 1 has different physical/logical endings:
phys=(1023, 254, 63) logical=(267349, 89, 4)

Disk /dev/sda4: 469 MB, 469893120 bytes
2 heads, 4 sectors/track, 114720 cylinders
Units = cylinders of 8 * 512 = 4096 bytes

Disk /dev/sda4 doesn’t contain a valid partition table
[~] #

[~] # mdadm -E /dev/sda3
/dev/sda3:
Magic : a92b4efc
Version : 00.90.00
UUID : 9d56f3ad:0f6c7547:e7feb5d1:0092c6c8
Creation Time : Wed Dec 22 12:34:02 2010
Raid Level : raid5
Used Dev Size : 2928697600 (2793.02 GiB 2998.99 GB)
Array Size : 8786092800 (8379.07 GiB 8996.96 GB)
Raid Devices : 4
Total Devices : 4
Preferred Minor : 0

Update Time : Wed Jun 5 23:07:18 2013
State : clean
Active Devices : 3
Working Devices : 4
Failed Devices : 0
Spare Devices : 1
Checksum : 8bd3aa41 – correct
Events : 0.17507

Layout : left-symmetric
Chunk Size : 64K

Number Major Minor RaidDevice State
this 4 8 3 4 spare /dev/sda3

0 0 0 0 0 removed
1 1 8 19 1 active sync /dev/sdb3
2 2 8 35 2 active sync /dev/sdc3
3 3 8 51 3 active sync /dev/sdd3
4 4 8 3 4 spare /dev/sda3
[~] #

[~] # mdadm -E /dev/sdb3
/dev/sdb3:
Magic : a92b4efc
Version : 00.90.00
UUID : 9d56f3ad:0f6c7547:e7feb5d1:0092c6c8
Creation Time : Wed Dec 22 12:34:02 2010
Raid Level : raid5
Used Dev Size : 2928697600 (2793.02 GiB 2998.99 GB)
Array Size : 8786092800 (8379.07 GiB 8996.96 GB)
Raid Devices : 4
Total Devices : 4
Preferred Minor : 0

Update Time : Wed Jun 5 23:18:18 2013
State : clean
Active Devices : 3
Working Devices : 4
Failed Devices : 0
Spare Devices : 1
Checksum : 8bd3aed1 – correct
Events : 0.17753

Layout : left-symmetric
Chunk Size : 64K

Number Major Minor RaidDevice State
this 1 8 19 1 active sync /dev/sdb3

0 0 0 0 0 removed
1 1 8 19 1 active sync /dev/sdb3
2 2 8 35 2 active sync /dev/sdc3
3 3 8 51 3 active sync /dev/sdd3
4 4 8 3 4 spare /dev/sda3
[~] #

[~] # mdadm -E /dev/sdc3
/dev/sdc3:
Magic : a92b4efc
Version : 00.90.00
UUID : 9d56f3ad:0f6c7547:e7feb5d1:0092c6c8
Creation Time : Wed Dec 22 12:34:02 2010
Raid Level : raid5
Used Dev Size : 2928697600 (2793.02 GiB 2998.99 GB)
Array Size : 8786092800 (8379.07 GiB 8996.96 GB)
Raid Devices : 4
Total Devices : 4
Preferred Minor : 0

Update Time : Wed Jun 5 23:21:59 2013
State : clean
Active Devices : 3
Working Devices : 4
Failed Devices : 0
Spare Devices : 1
Checksum : 8bd3b060 – correct
Events : 0.17833

Layout : left-symmetric
Chunk Size : 64K

Number Major Minor RaidDevice State
this 2 8 35 2 active sync /dev/sdc3

0 0 0 0 0 removed
1 1 8 19 1 active sync /dev/sdb3
2 2 8 35 2 active sync /dev/sdc3
3 3 8 51 3 active sync /dev/sdd3
4 4 8 3 4 spare /dev/sda3
[~] #

[~] # mdadm -E /dev/sdd3
/dev/sdd3:
Magic : a92b4efc
Version : 00.90.00
UUID : 9d56f3ad:0f6c7547:e7feb5d1:0092c6c8
Creation Time : Wed Dec 22 12:34:02 2010
Raid Level : raid5
Used Dev Size : 2928697600 (2793.02 GiB 2998.99 GB)
Array Size : 8786092800 (8379.07 GiB 8996.96 GB)
Raid Devices : 4
Total Devices : 4
Preferred Minor : 0

Update Time : Wed Jun 5 23:25:08 2013
State : active
Active Devices : 3
Working Devices : 4
Failed Devices : 0
Spare Devices : 1
Checksum : 8bd36bc9 – correct
Events : 0.17901

Layout : left-symmetric
Chunk Size : 64K

Number Major Minor RaidDevice State
this 3 8 51 3 active sync /dev/sdd3

0 0 0 0 0 removed
1 1 8 19 1 active sync /dev/sdb3
2 2 8 35 2 active sync /dev/sdc3
3 3 8 51 3 active sync /dev/sdd3
4 4 8 3 4 spare /dev/sda3
[~] #

[~] # mdadm -D /dev/md0
/dev/md0:
Version : 00.90.03
Creation Time : Wed Dec 22 12:34:02 2010
Raid Level : raid5
Array Size : 8786092800 (8379.07 GiB 8996.96 GB)
Used Dev Size : 2928697600 (2793.02 GiB 2998.99 GB)
Raid Devices : 4
Total Devices : 4
Preferred Minor : 0
Persistence : Superblock is persistent

Update Time : Thu Jun 6 00:51:25 2013
State : clean, degraded, recovering
Active Devices : 3
Working Devices : 4
Failed Devices : 0
Spare Devices : 1

Layout : left-symmetric
Chunk Size : 64K

Rebuild Status : 36% complete

UUID : 9d56f3ad:0f6c7547:e7feb5d1:0092c6c8
Events : 0.20114

Number Major Minor RaidDevice State
4 8 3 0 spare rebuilding /dev/sda3
1 8 19 1 active sync /dev/sdb3
2 8 35 2 active sync /dev/sdc3
3 8 51 3 active sync /dev/sdd3
[~] #

[~] # cat /etc/raidtab
raiddev /dev/md0
raid-level 5
nr-raid-disks 4
nr-spare-disks 0
chunk-size 4
persistent-superblock 1
device /dev/sda3
raid-disk 0
device /dev/sdb3
raid-disk 1
device /dev/sdc3
raid-disk 2
device /dev/sdd3
raid-disk 3
[~] #

Working with an 8-disk RAID 5

[~] # more /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath]
md0 : active raid5 sda3[0] sdh3[7] sdg3[6] sdf3[5] sde3[4] sdd3[3] sdc3[2] sdb3[1]
10244987200 blocks level 5, 64k chunk, algorithm 2 [8/8] [UUUUUUUU]

md8 : active raid1 sdh2[2](S) sdg2[3](S) sdf2[4](S) sde2[5](S) sdd2[6](S) sdc2[7](S) sdb2[1] sda2[0]
530048 blocks [2/2] [UU]

md13 : active raid1 sda4[0] sdh4[7] sdg4[6] sdf4[5] sde4[4] sdd4[3] sdc4[2] sdb4[1]
458880 blocks [8/8] [UUUUUUUU]
bitmap: 0/57 pages [0KB], 4KB chunk

md9 : active raid1 sda1[0] sdh1[7] sde1[6] sdd1[5] sdg1[4] sdf1[3] sdc1[2] sdb1[1]
530048 blocks [8/8] [UUUUUUUU]
bitmap: 0/65 pages [0KB], 4KB chunk

unused devices:
[~] #

[~] # fdisk -l

Disk /dev/sde: 2000.3 GB, 2000398934016 bytes
255 heads, 63 sectors/track, 243201 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System
/dev/sde1 1 66 530125 83 Linux
/dev/sde2 67 132 530142 83 Linux
/dev/sde3 133 243138 1951945693 83 Linux
/dev/sde4 243139 243200 498012 83 Linux

Disk /dev/sdf: 2000.3 GB, 2000398934016 bytes
255 heads, 63 sectors/track, 243201 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System
/dev/sdf1 1 66 530125 83 Linux
/dev/sdf2 67 132 530142 83 Linux
/dev/sdf3 133 243138 1951945693 83 Linux
/dev/sdf4 243139 243200 498012 83 Linux

Disk /dev/sdg: 2000.3 GB, 2000398934016 bytes
255 heads, 63 sectors/track, 243201 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System
/dev/sdg1 1 66 530125 83 Linux
/dev/sdg2 67 132 530142 83 Linux
/dev/sdg3 133 243138 1951945693 83 Linux
/dev/sdg4 243139 243200 498012 83 Linux

Disk /dev/sdh: 2000.3 GB, 2000398934016 bytes
255 heads, 63 sectors/track, 243201 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System
/dev/sdh1 1 66 530125 83 Linux
/dev/sdh2 67 132 530142 83 Linux
/dev/sdh3 133 243138 1951945693 83 Linux
/dev/sdh4 243139 243200 498012 83 Linux

Disk /dev/sdc: 2000.3 GB, 2000398934016 bytes
255 heads, 63 sectors/track, 243201 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System
/dev/sdc1 1 66 530125 83 Linux
/dev/sdc2 67 132 530142 83 Linux
/dev/sdc3 133 243138 1951945693 83 Linux
/dev/sdc4 243139 243200 498012 83 Linux

Disk /dev/sdb: 1500.3 GB, 1500301910016 bytes
255 heads, 63 sectors/track, 182401 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System
/dev/sdb1 1 66 530125 83 Linux
/dev/sdb2 67 132 530142 83 Linux
/dev/sdb3 133 182338 1463569693 83 Linux
/dev/sdb4 182339 182400 498012 83 Linux

Disk /dev/sdd: 2000.3 GB, 2000398934016 bytes
255 heads, 63 sectors/track, 243201 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System
/dev/sdd1 1 66 530125 83 Linux
/dev/sdd2 67 132 530142 83 Linux
/dev/sdd3 133 243138 1951945693 83 Linux
/dev/sdd4 243139 243200 498012 83 Linux

Disk /dev/sda: 1500.3 GB, 1500301910016 bytes
255 heads, 63 sectors/track, 182401 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System
/dev/sda1 1 66 530125 83 Linux
/dev/sda2 67 132 530142 83 Linux
/dev/sda3 133 182338 1463569693 83 Linux
/dev/sda4 182339 182400 498012 83 Linux

Disk /dev/sda4: 469 MB, 469893120 bytes
2 heads, 4 sectors/track, 114720 cylinders
Units = cylinders of 8 * 512 = 4096 bytes

Disk /dev/sda4 doesn’t contain a valid partition table

Disk /dev/sdx: 515 MB, 515899392 bytes
8 heads, 32 sectors/track, 3936 cylinders
Units = cylinders of 256 * 512 = 131072 bytes

Device Boot Start End Blocks Id System
/dev/sdx1 1 17 2160 83 Linux
/dev/sdx2 18 1910 242304 83 Linux
/dev/sdx3 1911 3803 242304 83 Linux
/dev/sdx4 3804 3936 17024 5 Extended
/dev/sdx5 3804 3868 8304 83 Linux
/dev/sdx6 3869 3936 8688 83 Linux

Disk /dev/md9: 542 MB, 542769152 bytes
2 heads, 4 sectors/track, 132512 cylinders
Units = cylinders of 8 * 512 = 4096 bytes

Disk /dev/md9 doesn’t contain a valid partition table

Disk /dev/md8: 542 MB, 542769152 bytes
2 heads, 4 sectors/track, 132512 cylinders
Units = cylinders of 8 * 512 = 4096 bytes

Disk /dev/md8 doesn’t contain a valid partition table

Disk /dev/md0: 10490.8 GB, 10490866892800 bytes
2 heads, 4 sectors/track, -1733720496 cylinders
Units = cylinders of 8 * 512 = 4096 bytes

Disk /dev/md0 doesn’t contain a valid partition table
[~] #

[~] # mdadm -E /dev/sda3
/dev/sda3:
Magic : a92b4efc
Version : 00.90.00
UUID : 478da556:4bba431a:8e29dce8:fdee62fd
Creation Time : Wed Mar 9 14:34:57 2011
Raid Level : raid5
Used Dev Size : 1463569600 (1395.77 GiB 1498.70 GB)
Array Size : 10244987200 (9770.38 GiB 10490.87 GB)
Raid Devices : 8
Total Devices : 8
Preferred Minor : 0

Update Time : Wed Jun 5 23:09:13 2013
State : clean
Active Devices : 8
Working Devices : 8
Failed Devices : 0
Spare Devices : 0
Checksum : c1bb0e0b – correct
Events : 0.23432693

Layout : left-symmetric
Chunk Size : 64K

Number Major Minor RaidDevice State
this 0 8 3 0 active sync /dev/sda3

0 0 8 3 0 active sync /dev/sda3
1 1 8 19 1 active sync /dev/sdb3
2 2 8 35 2 active sync /dev/sdc3
3 3 8 51 3 active sync /dev/sdd3
4 4 8 67 4 active sync /dev/sde3
5 5 8 83 5 active sync /dev/sdf3
6 6 8 99 6 active sync /dev/sdg3
7 7 8 115 7 active sync /dev/sdh3
[~] #

[~] # mdadm -E /dev/sdb3
/dev/sdb3:
Magic : a92b4efc
Version : 00.90.00
UUID : 478da556:4bba431a:8e29dce8:fdee62fd
Creation Time : Wed Mar 9 14:34:57 2011
Raid Level : raid5
Used Dev Size : 1463569600 (1395.77 GiB 1498.70 GB)
Array Size : 10244987200 (9770.38 GiB 10490.87 GB)
Raid Devices : 8
Total Devices : 8
Preferred Minor : 0

Update Time : Wed Jun 5 23:19:22 2013
State : clean
Active Devices : 8
Working Devices : 8
Failed Devices : 0
Spare Devices : 0
Checksum : c1bb107e – correct
Events : 0.23432693

Layout : left-symmetric
Chunk Size : 64K

Number Major Minor RaidDevice State
this 1 8 19 1 active sync /dev/sdb3

0 0 8 3 0 active sync /dev/sda3
1 1 8 19 1 active sync /dev/sdb3
2 2 8 35 2 active sync /dev/sdc3
3 3 8 51 3 active sync /dev/sdd3
4 4 8 67 4 active sync /dev/sde3
5 5 8 83 5 active sync /dev/sdf3
6 6 8 99 6 active sync /dev/sdg3
7 7 8 115 7 active sync /dev/sdh3
[~] #

[~] # mdadm -E /dev/sdc3
/dev/sdc3:
Magic : a92b4efc
Version : 00.90.00
UUID : 478da556:4bba431a:8e29dce8:fdee62fd
Creation Time : Wed Mar 9 14:34:57 2011
Raid Level : raid5
Used Dev Size : 1463569600 (1395.77 GiB 1498.70 GB)
Array Size : 10244987200 (9770.38 GiB 10490.87 GB)
Raid Devices : 8
Total Devices : 8
Preferred Minor : 0

Update Time : Wed Jun 5 23:22:38 2013
State : clean
Active Devices : 8
Working Devices : 8
Failed Devices : 0
Spare Devices : 0
Checksum : c1bb1154 – correct
Events : 0.23432693

Layout : left-symmetric
Chunk Size : 64K

Number Major Minor RaidDevice State
this 2 8 35 2 active sync /dev/sdc3

0 0 8 3 0 active sync /dev/sda3
1 1 8 19 1 active sync /dev/sdb3
2 2 8 35 2 active sync /dev/sdc3
3 3 8 51 3 active sync /dev/sdd3
4 4 8 67 4 active sync /dev/sde3
5 5 8 83 5 active sync /dev/sdf3
6 6 8 99 6 active sync /dev/sdg3
7 7 8 115 7 active sync /dev/sdh3
[~] #

[~] # mdadm -E /dev/sdd3
/dev/sdd3:
Magic : a92b4efc
Version : 00.90.00
UUID : 478da556:4bba431a:8e29dce8:fdee62fd
Creation Time : Wed Mar 9 14:34:57 2011
Raid Level : raid5
Used Dev Size : 1463569600 (1395.77 GiB 1498.70 GB)
Array Size : 10244987200 (9770.38 GiB 10490.87 GB)
Raid Devices : 8
Total Devices : 8
Preferred Minor : 0

Update Time : Wed Jun 5 23:25:51 2013
State : clean
Active Devices : 8
Working Devices : 8
Failed Devices : 0
Spare Devices : 0
Checksum : c1bb1227 – correct
Events : 0.23432693

Layout : left-symmetric
Chunk Size : 64K

Number Major Minor RaidDevice State
this 3 8 51 3 active sync /dev/sdd3

0 0 8 3 0 active sync /dev/sda3
1 1 8 19 1 active sync /dev/sdb3
2 2 8 35 2 active sync /dev/sdc3
3 3 8 51 3 active sync /dev/sdd3
4 4 8 67 4 active sync /dev/sde3
5 5 8 83 5 active sync /dev/sdf3
6 6 8 99 6 active sync /dev/sdg3
7 7 8 115 7 active sync /dev/sdh3
[~] #

[~] # mdadm -E /dev/sde3
/dev/sde3:
Magic : a92b4efc
Version : 00.90.00
UUID : 478da556:4bba431a:8e29dce8:fdee62fd
Creation Time : Wed Mar 9 14:34:57 2011
Raid Level : raid5
Used Dev Size : 1463569600 (1395.77 GiB 1498.70 GB)
Array Size : 10244987200 (9770.38 GiB 10490.87 GB)
Raid Devices : 8
Total Devices : 8
Preferred Minor : 0

Update Time : Wed Jun 5 23:30:42 2013
State : clean
Active Devices : 8
Working Devices : 8
Failed Devices : 0
Spare Devices : 0
Checksum : c1bb135c – correct
Events : 0.23432693

Layout : left-symmetric
Chunk Size : 64K

Number Major Minor RaidDevice State
this 4 8 67 4 active sync /dev/sde3

0 0 8 3 0 active sync /dev/sda3
1 1 8 19 1 active sync /dev/sdb3
2 2 8 35 2 active sync /dev/sdc3
3 3 8 51 3 active sync /dev/sdd3
4 4 8 67 4 active sync /dev/sde3
5 5 8 83 5 active sync /dev/sdf3
6 6 8 99 6 active sync /dev/sdg3
7 7 8 115 7 active sync /dev/sdh3
[~] #

[~] # mdadm -E /dev/sdf3
/dev/sdf3:
Magic : a92b4efc
Version : 00.90.00
UUID : 478da556:4bba431a:8e29dce8:fdee62fd
Creation Time : Wed Mar 9 14:34:57 2011
Raid Level : raid5
Used Dev Size : 1463569600 (1395.77 GiB 1498.70 GB)
Array Size : 10244987200 (9770.38 GiB 10490.87 GB)
Raid Devices : 8
Total Devices : 8
Preferred Minor : 0

Update Time : Wed Jun 5 23:31:09 2013
State : clean
Active Devices : 8
Working Devices : 8
Failed Devices : 0
Spare Devices : 0
Checksum : c1bb1389 – correct
Events : 0.23432693

Layout : left-symmetric
Chunk Size : 64K

Number Major Minor RaidDevice State
this 5 8 83 5 active sync /dev/sdf3

0 0 8 3 0 active sync /dev/sda3
1 1 8 19 1 active sync /dev/sdb3
2 2 8 35 2 active sync /dev/sdc3
3 3 8 51 3 active sync /dev/sdd3
4 4 8 67 4 active sync /dev/sde3
5 5 8 83 5 active sync /dev/sdf3
6 6 8 99 6 active sync /dev/sdg3
7 7 8 115 7 active sync /dev/sdh3
[~] #

[~] # mdadm -E /dev/sdg3
/dev/sdg3:
Magic : a92b4efc
Version : 00.90.00
UUID : 478da556:4bba431a:8e29dce8:fdee62fd
Creation Time : Wed Mar 9 14:34:57 2011
Raid Level : raid5
Used Dev Size : 1463569600 (1395.77 GiB 1498.70 GB)
Array Size : 10244987200 (9770.38 GiB 10490.87 GB)
Raid Devices : 8
Total Devices : 8
Preferred Minor : 0

Update Time : Wed Jun 5 23:31:34 2013
State : clean
Active Devices : 8
Working Devices : 8
Failed Devices : 0
Spare Devices : 0
Checksum : c1bb13b4 – correct
Events : 0.23432693

Layout : left-symmetric
Chunk Size : 64K

Number Major Minor RaidDevice State
this 6 8 99 6 active sync /dev/sdg3

0 0 8 3 0 active sync /dev/sda3
1 1 8 19 1 active sync /dev/sdb3
2 2 8 35 2 active sync /dev/sdc3
3 3 8 51 3 active sync /dev/sdd3
4 4 8 67 4 active sync /dev/sde3
5 5 8 83 5 active sync /dev/sdf3
6 6 8 99 6 active sync /dev/sdg3
7 7 8 115 7 active sync /dev/sdh3
[~] #

[~] # mdadm -E /dev/sdh3
/dev/sdh3:
Magic : a92b4efc
Version : 00.90.00
UUID : 478da556:4bba431a:8e29dce8:fdee62fd
Creation Time : Wed Mar 9 14:34:57 2011
Raid Level : raid5
Used Dev Size : 1463569600 (1395.77 GiB 1498.70 GB)
Array Size : 10244987200 (9770.38 GiB 10490.87 GB)
Raid Devices : 8
Total Devices : 8
Preferred Minor : 0

Update Time : Wed Jun 5 23:31:58 2013
State : clean
Active Devices : 8
Working Devices : 8
Failed Devices : 0
Spare Devices : 0
Checksum : c1bb13de – correct
Events : 0.23432693

Layout : left-symmetric
Chunk Size : 64K

Number Major Minor RaidDevice State
this 7 8 115 7 active sync /dev/sdh3

0 0 8 3 0 active sync /dev/sda3
1 1 8 19 1 active sync /dev/sdb3
2 2 8 35 2 active sync /dev/sdc3
3 3 8 51 3 active sync /dev/sdd3
4 4 8 67 4 active sync /dev/sde3
5 5 8 83 5 active sync /dev/sdf3
6 6 8 99 6 active sync /dev/sdg3
7 7 8 115 7 active sync /dev/sdh3
[~] #

[~] # mdadm -D /dev/md0
/dev/md0:
Version : 00.90.03
Creation Time : Wed Mar 9 14:34:57 2011
Raid Level : raid5
Array Size : 10244987200 (9770.38 GiB 10490.87 GB)
Used Dev Size : 1463569600 (1395.77 GiB 1498.70 GB)
Raid Devices : 8
Total Devices : 8
Preferred Minor : 0
Persistence : Superblock is persistent

Update Time : Thu Jun 6 00:52:46 2013
State : clean
Active Devices : 8
Working Devices : 8
Failed Devices : 0
Spare Devices : 0

Layout : left-symmetric
Chunk Size : 64K

UUID : 478da556:4bba431a:8e29dce8:fdee62fd
Events : 0.23432693

Number Major Minor RaidDevice State
0 8 3 0 active sync /dev/sda3
1 8 19 1 active sync /dev/sdb3
2 8 35 2 active sync /dev/sdc3
3 8 51 3 active sync /dev/sdd3
4 8 67 4 active sync /dev/sde3
5 8 83 5 active sync /dev/sdf3
6 8 99 6 active sync /dev/sdg3
7 8 115 7 active sync /dev/sdh3
[~] #

[~] # cat /etc/raidtab
raiddev /dev/md0
raid-level 5
nr-raid-disks 8
nr-spare-disks 0
chunk-size 4
persistent-superblock 1
device /dev/sda3
raid-disk 0
device /dev/sdb3
raid-disk 1
device /dev/sdc3
raid-disk 2
device /dev/sdd3
raid-disk 3
device /dev/sde3
raid-disk 4
device /dev/sdf3
raid-disk 5
device /dev/sdg3
raid-disk 6
device /dev/sdh3
raid-disk 7
[~] #

Working with an 8-disk RAID 6

[~] # more /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath]
md0 : active raid6 sda3[0] sdh3[7] sdg3[6] sdf3[5] sde3[4] sdd3[3] sdc3[2] sdb3[1]
11711673600 blocks level 6, 64k chunk, algorithm 2 [8/8] [UUUUUUUU]

md8 : active raid1 sdh2[2](S) sdg2[3](S) sdf2[4](S) sde2[5](S) sdd2[6](S) sdc2[7](S) sdb2[1] sda2[0]
530048 blocks [2/2] [UU]

md13 : active raid1 sda4[0] sdh4[7] sdg4[6] sdf4[5] sde4[4] sdd4[3] sdc4[2] sdb4[1]
458880 blocks [8/8] [UUUUUUUU]
bitmap: 0/57 pages [0KB], 4KB chunk

md9 : active raid1 sda1[0] sdh1[7] sdf1[6] sdg1[5] sde1[4] sdd1[3] sdc1[2] sdb1[1]
530048 blocks [8/8] [UUUUUUUU]
bitmap: 3/65 pages [12KB], 4KB chunk

unused devices:
[~] #

[~] # fdisk -l

Disk /dev/sde: 2000.3 GB, 2000398934016 bytes
255 heads, 63 sectors/track, 243201 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System
/dev/sde1 1 66 530125 83 Linux
/dev/sde2 67 132 530142 83 Linux
/dev/sde3 133 243138 1951945693 83 Linux
/dev/sde4 243139 243200 498012 83 Linux

Disk /dev/sdf: 2000.3 GB, 2000398934016 bytes
255 heads, 63 sectors/track, 243201 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System
/dev/sdf1 1 66 530125 83 Linux
/dev/sdf2 67 132 530142 83 Linux
/dev/sdf3 133 243138 1951945693 83 Linux
/dev/sdf4 243139 243200 498012 83 Linux

Disk /dev/sdg: 2000.3 GB, 2000398934016 bytes
255 heads, 63 sectors/track, 243201 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System
/dev/sdg1 1 66 530125 83 Linux
/dev/sdg2 67 132 530142 83 Linux
/dev/sdg3 133 243138 1951945693 83 Linux
/dev/sdg4 243139 243200 498012 83 Linux

Disk /dev/sdh: 2000.3 GB, 2000398934016 bytes
255 heads, 63 sectors/track, 243201 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System
/dev/sdh1 1 66 530125 83 Linux
/dev/sdh2 67 132 530142 83 Linux
/dev/sdh3 133 243138 1951945693 83 Linux
/dev/sdh4 243139 243200 498012 83 Linux

Disk /dev/sdd: 2000.3 GB, 2000398934016 bytes
255 heads, 63 sectors/track, 243201 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System
/dev/sdd1 1 66 530125 83 Linux
/dev/sdd2 67 132 530142 83 Linux
/dev/sdd3 133 243138 1951945693 83 Linux
/dev/sdd4 243139 243200 498012 83 Linux

Disk /dev/sdb: 2000.3 GB, 2000398934016 bytes
255 heads, 63 sectors/track, 243201 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System
/dev/sdb1 1 66 530125 83 Linux
/dev/sdb2 67 132 530142 83 Linux
/dev/sdb3 133 243138 1951945693 83 Linux
/dev/sdb4 243139 243200 498012 83 Linux

Disk /dev/sdc: 2000.3 GB, 2000398934016 bytes
255 heads, 63 sectors/track, 243201 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System
/dev/sdc1 1 66 530125 83 Linux
/dev/sdc2 67 132 530142 83 Linux
/dev/sdc3 133 243138 1951945693 83 Linux
/dev/sdc4 243139 243200 498012 83 Linux

Disk /dev/sda: 2000.3 GB, 2000398934016 bytes
255 heads, 63 sectors/track, 243201 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System
/dev/sda1 1 66 530125 83 Linux
/dev/sda2 67 132 530142 83 Linux
/dev/sda3 133 243138 1951945693 83 Linux
/dev/sda4 243139 243200 498012 83 Linux

Disk /dev/sda4: 469 MB, 469893120 bytes
2 heads, 4 sectors/track, 114720 cylinders
Units = cylinders of 8 * 512 = 4096 bytes

Disk /dev/sda4 doesn’t contain a valid partition table

Disk /dev/sdx: 515 MB, 515899392 bytes
8 heads, 32 sectors/track, 3936 cylinders
Units = cylinders of 256 * 512 = 131072 bytes

Device Boot Start End Blocks Id System
/dev/sdx1 1 17 2160 83 Linux
/dev/sdx2 18 1910 242304 83 Linux
/dev/sdx3 1911 3803 242304 83 Linux
/dev/sdx4 3804 3936 17024 5 Extended
/dev/sdx5 3804 3868 8304 83 Linux
/dev/sdx6 3869 3936 8688 83 Linux

Disk /dev/md9: 542 MB, 542769152 bytes
2 heads, 4 sectors/track, 132512 cylinders
Units = cylinders of 8 * 512 = 4096 bytes

Disk /dev/md9 doesn’t contain a valid partition table

Disk /dev/md8: 542 MB, 542769152 bytes
2 heads, 4 sectors/track, 132512 cylinders
Units = cylinders of 8 * 512 = 4096 bytes

Disk /dev/md8 doesn’t contain a valid partition table

Disk /dev/md0: 11992.7 GB, 11992753766400 bytes
2 heads, 4 sectors/track, -1367048896 cylinders
Units = cylinders of 8 * 512 = 4096 bytes

Disk /dev/md0 doesn’t contain a valid partition table
[~] #

[~] # mdadm -E /dev/sda3
/dev/sda3:
Magic : a92b4efc
Version : 00.90.00
UUID : fd2c41b5:17649ac4:1e10b251:f1136e44
Creation Time : Wed Dec 12 04:34:21 2012
Raid Level : raid6
Used Dev Size : 1951945600 (1861.52 GiB 1998.79 GB)
Array Size : 11711673600 (11169.12 GiB 11992.75 GB)
Raid Devices : 8
Total Devices : 8
Preferred Minor : 0

Update Time : Wed Jun 5 23:10:11 2013
State : clean
Active Devices : 8
Working Devices : 8
Failed Devices : 0
Spare Devices : 0
Checksum : e424ef27 – correct
Events : 0.3789835

Chunk Size : 64K

Number Major Minor RaidDevice State
this 0 8 3 0 active sync /dev/sda3

0 0 8 3 0 active sync /dev/sda3
1 1 8 19 1 active sync /dev/sdb3
2 2 8 35 2 active sync /dev/sdc3
3 3 8 51 3 active sync /dev/sdd3
4 4 8 67 4 active sync /dev/sde3
5 5 8 83 5 active sync /dev/sdf3
6 6 8 99 6 active sync /dev/sdg3
7 7 8 115 7 active sync /dev/sdh3
[~] #

[~] # mdadm -E /dev/sdb3
/dev/sdb3:
Magic : a92b4efc
Version : 00.90.00
UUID : fd2c41b5:17649ac4:1e10b251:f1136e44
Creation Time : Wed Dec 12 04:34:21 2012
Raid Level : raid6
Used Dev Size : 1951945600 (1861.52 GiB 1998.79 GB)
Array Size : 11711673600 (11169.12 GiB 11992.75 GB)
Raid Devices : 8
Total Devices : 8
Preferred Minor : 0

Update Time : Wed Jun 5 23:16:04 2013
State : clean
Active Devices : 8
Working Devices : 8
Failed Devices : 0
Spare Devices : 0
Checksum : e424f09a – correct
Events : 0.3789835

Chunk Size : 64K

Number Major Minor RaidDevice State
this 1 8 19 1 active sync /dev/sdb3

0 0 8 3 0 active sync /dev/sda3
1 1 8 19 1 active sync /dev/sdb3
2 2 8 35 2 active sync /dev/sdc3
3 3 8 51 3 active sync /dev/sdd3
4 4 8 67 4 active sync /dev/sde3
5 5 8 83 5 active sync /dev/sdf3
6 6 8 99 6 active sync /dev/sdg3
7 7 8 115 7 active sync /dev/sdh3
[~] #

[~] # mdadm -E /dev/sdc3
/dev/sdc3:
Magic : a92b4efc
Version : 00.90.00
UUID : fd2c41b5:17649ac4:1e10b251:f1136e44
Creation Time : Wed Dec 12 04:34:21 2012
Raid Level : raid6
Used Dev Size : 1951945600 (1861.52 GiB 1998.79 GB)
Array Size : 11711673600 (11169.12 GiB 11992.75 GB)
Raid Devices : 8
Total Devices : 8
Preferred Minor : 0

Update Time : Wed Jun 5 23:23:14 2013
State : clean
Active Devices : 8
Working Devices : 8
Failed Devices : 0
Spare Devices : 0
Checksum : e424f25a – correct
Events : 0.3789835

Chunk Size : 64K

Number Major Minor RaidDevice State
this 2 8 35 2 active sync /dev/sdc3

0 0 8 3 0 active sync /dev/sda3
1 1 8 19 1 active sync /dev/sdb3
2 2 8 35 2 active sync /dev/sdc3
3 3 8 51 3 active sync /dev/sdd3
4 4 8 67 4 active sync /dev/sde3
5 5 8 83 5 active sync /dev/sdf3
6 6 8 99 6 active sync /dev/sdg3
7 7 8 115 7 active sync /dev/sdh3
[~] #

[~] # mdadm -E /dev/sdd3
/dev/sdd3:
Magic : a92b4efc
Version : 00.90.00
UUID : fd2c41b5:17649ac4:1e10b251:f1136e44
Creation Time : Wed Dec 12 04:34:21 2012
Raid Level : raid6
Used Dev Size : 1951945600 (1861.52 GiB 1998.79 GB)
Array Size : 11711673600 (11169.12 GiB 11992.75 GB)
Raid Devices : 8
Total Devices : 8
Preferred Minor : 0

Update Time : Wed Jun 5 23:26:29 2013
State : clean
Active Devices : 8
Working Devices : 8
Failed Devices : 0
Spare Devices : 0
Checksum : e424f32f – correct
Events : 0.3789835

Chunk Size : 64K

Number Major Minor RaidDevice State
this 3 8 51 3 active sync /dev/sdd3

0 0 8 3 0 active sync /dev/sda3
1 1 8 19 1 active sync /dev/sdb3
2 2 8 35 2 active sync /dev/sdc3
3 3 8 51 3 active sync /dev/sdd3
4 4 8 67 4 active sync /dev/sde3
5 5 8 83 5 active sync /dev/sdf3
6 6 8 99 6 active sync /dev/sdg3
7 7 8 115 7 active sync /dev/sdh3
[~] #

[~] # mdadm -E /dev/sde3
/dev/sde3:
Magic : a92b4efc
Version : 00.90.00
UUID : fd2c41b5:17649ac4:1e10b251:f1136e44
Creation Time : Wed Dec 12 04:34:21 2012
Raid Level : raid6
Used Dev Size : 1951945600 (1861.52 GiB 1998.79 GB)
Array Size : 11711673600 (11169.12 GiB 11992.75 GB)
Raid Devices : 8
Total Devices : 8
Preferred Minor : 0

Update Time : Wed Jun 5 23:33:04 2013
State : clean
Active Devices : 8
Working Devices : 8
Failed Devices : 0
Spare Devices : 0
Checksum : e424f4cc – correct
Events : 0.3789835

Chunk Size : 64K

Number Major Minor RaidDevice State
this 4 8 67 4 active sync /dev/sde3

0 0 8 3 0 active sync /dev/sda3
1 1 8 19 1 active sync /dev/sdb3
2 2 8 35 2 active sync /dev/sdc3
3 3 8 51 3 active sync /dev/sdd3
4 4 8 67 4 active sync /dev/sde3
5 5 8 83 5 active sync /dev/sdf3
6 6 8 99 6 active sync /dev/sdg3
7 7 8 115 7 active sync /dev/sdh3
[~] #

[~] # mdadm -E /dev/sdf3
/dev/sdf3:
Magic : a92b4efc
Version : 00.90.00
UUID : fd2c41b5:17649ac4:1e10b251:f1136e44
Creation Time : Wed Dec 12 04:34:21 2012
Raid Level : raid6
Used Dev Size : 1951945600 (1861.52 GiB 1998.79 GB)
Array Size : 11711673600 (11169.12 GiB 11992.75 GB)
Raid Devices : 8
Total Devices : 8
Preferred Minor : 0

Update Time : Wed Jun 5 23:33:26 2013
State : active
Active Devices : 8
Working Devices : 8
Failed Devices : 0
Spare Devices : 0
Checksum : e3eb20e9 – correct
Events : 0.3789836

Chunk Size : 64K

Number Major Minor RaidDevice State
this 5 8 83 5 active sync /dev/sdf3

0 0 8 3 0 active sync /dev/sda3
1 1 8 19 1 active sync /dev/sdb3
2 2 8 35 2 active sync /dev/sdc3
3 3 8 51 3 active sync /dev/sdd3
4 4 8 67 4 active sync /dev/sde3
5 5 8 83 5 active sync /dev/sdf3
6 6 8 99 6 active sync /dev/sdg3
7 7 8 115 7 active sync /dev/sdh3
[~] #

[~] # mdadm -E /dev/sdg3
/dev/sdg3:
Magic : a92b4efc
Version : 00.90.00
UUID : fd2c41b5:17649ac4:1e10b251:f1136e44
Creation Time : Wed Dec 12 04:34:21 2012
Raid Level : raid6
Used Dev Size : 1951945600 (1861.52 GiB 1998.79 GB)
Array Size : 11711673600 (11169.12 GiB 11992.75 GB)
Raid Devices : 8
Total Devices : 8
Preferred Minor : 0

Update Time : Wed Jun 5 23:33:47 2013
State : clean
Active Devices : 8
Working Devices : 8
Failed Devices : 0
Spare Devices : 0
Checksum : e424f51b – correct
Events : 0.3789835

Chunk Size : 64K

Number Major Minor RaidDevice State
this 6 8 99 6 active sync /dev/sdg3

0 0 8 3 0 active sync /dev/sda3
1 1 8 19 1 active sync /dev/sdb3
2 2 8 35 2 active sync /dev/sdc3
3 3 8 51 3 active sync /dev/sdd3
4 4 8 67 4 active sync /dev/sde3
5 5 8 83 5 active sync /dev/sdf3
6 6 8 99 6 active sync /dev/sdg3
7 7 8 115 7 active sync /dev/sdh3
[~] #

[~] # mdadm -E /dev/sdh3
/dev/sdh3:
Magic : a92b4efc
Version : 00.90.00
UUID : fd2c41b5:17649ac4:1e10b251:f1136e44
Creation Time : Wed Dec 12 04:34:21 2012
Raid Level : raid6
Used Dev Size : 1951945600 (1861.52 GiB 1998.79 GB)
Array Size : 11711673600 (11169.12 GiB 11992.75 GB)
Raid Devices : 8
Total Devices : 8
Preferred Minor : 0

Update Time : Wed Jun 5 23:34:05 2013
State : clean
Active Devices : 8
Working Devices : 8
Failed Devices : 0
Spare Devices : 0
Checksum : e424f53f – correct
Events : 0.3789835

Chunk Size : 64K

Number Major Minor RaidDevice State
this 7 8 115 7 active sync /dev/sdh3

0 0 8 3 0 active sync /dev/sda3
1 1 8 19 1 active sync /dev/sdb3
2 2 8 35 2 active sync /dev/sdc3
3 3 8 51 3 active sync /dev/sdd3
4 4 8 67 4 active sync /dev/sde3
5 5 8 83 5 active sync /dev/sdf3
6 6 8 99 6 active sync /dev/sdg3
7 7 8 115 7 active sync /dev/sdh3
[~] #

[~] # mdadm -D /dev/md0
/dev/md0:
Version : 00.90.03
Creation Time : Wed Dec 12 04:34:21 2012
Raid Level : raid6
Array Size : 11711673600 (11169.12 GiB 11992.75 GB)
Used Dev Size : 1951945600 (1861.52 GiB 1998.79 GB)
Raid Devices : 8
Total Devices : 8
Preferred Minor : 0
Persistence : Superblock is persistent

Update Time : Thu Jun 6 00:56:45 2013
State : clean
Active Devices : 8
Working Devices : 8
Failed Devices : 0
Spare Devices : 0

Chunk Size : 64K

UUID : fd2c41b5:17649ac4:1e10b251:f1136e44
Events : 0.3789835

Number Major Minor RaidDevice State
0 8 3 0 active sync /dev/sda3
1 8 19 1 active sync /dev/sdb3
2 8 35 2 active sync /dev/sdc3
3 8 51 3 active sync /dev/sdd3
4 8 67 4 active sync /dev/sde3
5 8 83 5 active sync /dev/sdf3
6 8 99 6 active sync /dev/sdg3
7 8 115 7 active sync /dev/sdh3
[~] #

[~] # cat /etc/raidtab
raiddev /dev/md0
raid-level 6
nr-raid-disks 8
nr-spare-disks 0
chunk-size 4
persistent-superblock 1
device /dev/sda3
raid-disk 0
device /dev/sdb3
raid-disk 1
device /dev/sdc3
raid-disk 2
device /dev/sdd3
raid-disk 3
device /dev/sde3
raid-disk 4
device /dev/sdf3
raid-disk 5
device /dev/sdg3
raid-disk 6
device /dev/sdh3
raid-disk 7
[~] #

3 Jun

QNAP RAID Corruption / RAID System Errors

I – Introduction

II – How to Fix if RAID seems “Degraded”

III– How to Fix if RAID seems “Unmounted”

IV – How to Fix if RAID seems Not Active (New way to fix!)

V – RAID HDD order seems wrong, e.g. “RAID 5 – Drives: 2 4 3”, and device seems Not Active

VI – How to Fix if RAID seems as “Single Disk”

VII – User Remove the RAID Volume

VIII – How to Fix if 2 HDDs give errors on RAID 5, or 3 HDDs on RAID 6

IX – RAID fail – HDDs have no partitions

X – RAID fail – Partitions have no md superblock

XI – No md0 for array

XII – NAS fail – Mount HDD(s) with another QNAP NAS

I – Introduction;

Warning: this document is recommended for professional users only. If you don’t know what you’re doing, you may damage your RAID and lose data. QNAP Support Taiwan is great at solving this kind of RAID corruption easily, and my advice is to contact them directly in such cases.

You can download QNAP NAS Data Recovery Document Down Below:

QNAP_NAS_Data_Recovery

NAS is OK but cannot access data

raidtab is broken or missing

Check the RAID settings and configure the right raidtab

HDDs have no partitions

Use parted to recreate the partitions

•Partitions have no MD superblock

mdadm -CfR --assume-clean

•RAID array can’t be assembled or status is inactive

check the above and make sure every disk in the RAID exists

•RAID array can’t be mounted

e2fsck, e2fsck -b

•Able to mount RAID but data has disappeared

umount and e2fsck; if that does not work, try data recovery

•RAID is degraded, read-only

back up the data, then mdadm -CfR; if that does not work, recreate the RAID

NAS fail

Mount HDD(s) with another QNAP NAS (System Migration)

Mount HDD(s) with a PC (R-Studio / ext3/4 reader) (3rd-party tools)

Data deleted accidentally by the user/administrator

data recovery company, photorec, r-studio/r-linux

Introduction to the mdadm command

# mdadm -E /dev/sda3 > this tells you whether the partition is an md member disk

# mdadm -Af /dev/md0 /dev/sd[a-d]3 > this assembles the available md member disks into the RAID array

———————————–

# mdadm -CfR -l5 -n8 --assume-clean /dev/md0 /dev/sd[a-h]3 > this overwrites the md superblock on each disk

> -CfR = force creation of the RAID array and run it

> -l5 = RAID 5 array

> -n8 = number of member disks (8)

> --assume-clean = skip the initial resync of the data partitions
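
Putting those flags together, a minimal sketch for a 4-disk RAID 5 (adjust -l, -n and the partition list to your own layout, and double-check the disk order first, because -CfR rewrites the md superblocks):

# mdadm -CfR -l 5 -n 4 --assume-clean /dev/md0 /dev/sd[a-d]3
# cat /proc/mdstat
# mount /dev/md0 /share/MD0_DATA -t ext4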

Introduction to Two Scripts

# config_util

Usage: config_util input

input=0: Check if any HD existed.

input=1: Mirror ROOT partition.

input=2: Mirror Swap Space (not yet).

input=4: Mirror RFS_EXT partition.

>> usually we run config_util 1 to get md9 ready

# storage_boot_init

Usage: storage_boot_init phase

phase=1: mount ROOT partition.

phase=2: mount DATA partition, create storage.conf and refresh disk.

phase=3: Create_Disk_Storage_Conf.

>> usually we run storage_boot_init 1 to mount md9
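
For example, a typical manual sequence to bring the system volumes back up would look something like this (a sketch; on a healthy NAS these steps run automatically at boot):

# config_util 1
# storage_boot_init 1
# storage_boot_init 2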

II – How to Fix if RAID seems “Degraded”

If your RAID system looks as shown below, use this document. If not, please don’t try anything in this document:

A – QNAP FAQ Advice

Log in to the QNAP and go to Disk Management -> Volume Management. One of your HDDs should give a “Read/Write” or “Normal” error, or the QNAP does not recognize that there is an HDD in one of the slots.

Just pull out the broken HDD, wait over 20 seconds, and plug in the new HDD. If more than one HDD seems broken, don’t change 2 HDDs at the same time. Wait for the QNAP to finish synchronizing the first HDD, and after it completes, change the other broken HDD.

If you lose more HDDs than the RAID’s failure tolerance, back up your data quickly to another QNAP or an external HDD.

Note: the new HDD must be the same size as your other HDDs. The QNAP does not accept a smaller new HDD, and I don’t advise using a larger HDD in this kind of case either. You can use another brand of HDD, or HDDs with a different SATA speed.

Also, if the HDD still appears unplugged from its port even after you change it for a new one, it may be a hardware problem with the QNAP SATA cable or mainboard. Send the device to the vendor for repair, or open the device, unplug the SATA cable from the mainboard, and plug it back in.

B – QNAP RAID Recovery Document

RAID fail – RAID is degraded, read-only

•When the status is degraded, read-only, there are more disk failures than the RAID can support; you need to check which disks are faulty if the Web UI isn’t helpful

– Check klog or dmesg to find the faulty disks (see the example command after this list)

•Ask the user to back up the data first

•If the disks look OK, after the backup, try “mdadm -CfR --assume-clean” to recreate the RAID

•If the above doesn’t work, recreate the RAID from scratch
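
A quick way to spot the faulty disks from the shell, as suggested above (a sketch; the exact messages vary by kernel and drive):

[~] # dmesg | grep -i -E "ata|error|fail" | tail -n 20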

C – My Advice;

If your system shows “In Degraded Mode, Failed Drive X”, you have probably lost more HDDs than the RAID tolerates, so take your backup and re-install the QNAP from the beginning.

III – How to Fix if RAID Becomes “Unmounted”

If your RAID system looks as shown below, follow this document. If not, please don’t try anything in this document:

IF YOU HAVE CRITICAL DATA ON THE QNAP, PLEASE CONTACT QNAP TAIWAN SUPPORT

A – QNAP FAQ Solution

Q : My NAS lost all its settings, and all HDDs are shown as unmounted.

A : In case of corrupt/lost config:

1. Power off the NAS. Remove the HDD(s)

2. Power on the NAS

3. After a short beep and a long beep, plug the HDD back into the NAS

4. Run QNAP Finder, it will find the NAS, do NOT configure it!

5. Connect to the NAS by telnet on port 13131 (e.g. with PuTTY)

6. Run the following commands to recover with the default config

Use the following commands if using 1 drive (if you have more than 1 HDD, skip these commands)

#mount /dev/sda1 /mnt

# cd /mnt/.config/

# cp /etc/default_config/uLinux.conf /mnt/.config/

# reboot

Use the following commands if using 2 drives (not tested) (if you have more than 2 HDDs, skip these commands)

# mdadm -A /dev/md9 /dev/sda1 /dev/sdb1

# mount /dev/md9 /mnt

# cd /mnt/.config/

# cp /etc/default_config/uLinux.conf /mnt/.config/

# reboot

8. The above procedure will reset the configuration back to default, and then you need to reconfigure it. But all the shares should be available now.

Please remember NOT to re-initialize the HDD, since this will format your HDD and all your data will be lost.

9. To be prepared next time this happens, always make sure you have a working backup of your personal uLinux.conf!

Note: uLinux.conf is the main settings configuration file
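
A minimal way to keep such a backup from the shell (a sketch; on a running NAS the active copy normally lives under /etc/config/, and the destination share here is just an example):

[~] # cp /etc/config/uLinux.conf /share/Public/uLinux.conf.bak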

Taken From : Qnap FAQ

If you have 4 or more HDDs, follow this document. Don’t start the QNAP without HDDs, unlike the first two procedures:

RAID fail – Can’t be mounted, status unmounted

(from the official QNAP RAID recovery document)

1. Make sure the raid status is active (more /proc/mdstat)

2. Try to mount it manually

# mount /dev/md0 /share/MD0_DATA -t ext3

# mount /dev/md0 /share/MD0_DATA -t ext4

# mount /dev/md0 /share/MD0_DATA -o ro (read only)

3. use e2fsck / e2fsck_64 to check

# e2fsck -ay /dev/md0 (auto and continue with yes)

4. If there are many errors during the check, there may not be enough memory; you need to create more swap space.

Use the following command to create more swap space

[~] # more /proc/mdstat

…….

md8 : active raid1 sdh2[2](S) sdg2[3](S) sdf2[4](S) sde2[5](S) sdd2[6](S) sdc2[7](S) sdb2[1] sda2[0]

530048 blocks [2/2] [UU]

……….

[~] # swapoff /dev/md8

[~] # mdadm -S /dev/md8

mdadm: stopped /dev/md8

[~] # mkswap /dev/sda2

Setting up swapspace version 1, size = 542859 kB

no label, UUID=7194e0a9-be7a-43ac-829f-fd2d55e07d62

[~] # mkswap /dev/sdb2

Setting up swapspace version 1, size = 542859 kB

no label, UUID=0af8fcdd-8ed1-4fca-8f53-0349d86f9474

[~] # mkswap /dev/sdc2

Setting up swapspace version 1, size = 542859 kB

no label, UUID=f40bd836-3798-4c71-b8ff-9c1e9fbff6bf

[~] # mkswap /dev/sdd2

Setting up swapspace version 1, size = 542859 kB

no label, UUID=4dad1835-8d88-4cf1-a851-d80a87706fea

[~] # swapon /dev/sda2

[~] # swapon /dev/sdb2

[~] # swapon /dev/sdc2

[~] # swapon /dev/sdd2

[~] # e2fsck_64 -fy /dev/md0

If there is no file system superblock or the check fails, you can try a backup superblock.

1. Use the following command to find the backup superblock locations

# /usr/local/sbin/dumpe2fs /dev/md0 | grep superblock

Sample output:

Primary superblock at 0, Group descriptors at 1-6

Backup superblock at 32768, Group descriptors at 32769-32774

Backup superblock at 98304, Group descriptors at 98305-98310

Further backup superblocks at 163840, 229376, 294912, 819200, 884736, 1605632, 2654208, 4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968, 102400000, 214990848, 512000000, 550731776, 644972544

2. Now check and repair a Linux file system using alternate superblock # 32768:

# e2fsck -b 32768 /dev/md0

Sample output:

fsck 1.40.2 (12-Jul-2007)

e2fsck 1.40.2 (12-Jul-2007)

/dev/sda2 was not cleanly unmounted, check forced.

Pass 1: Checking inodes, blocks, and sizes

…….

Free blocks count wrong for group #241 (32254, counted=32253).

Fix? yes

………

/dev/sda2: ***** FILE SYSTEM WAS MODIFIED *****

/dev/sda2: 59586/30539776 files (0.6% non-contiguous), 3604682/61059048 blocks

3. Now try to mount file system using mount command:

# mount /dev/md0 /share/MD0_DATA -t ext4

RAID fail – able to mount but data has disappeared

•If the mount is OK but the data has disappeared, unmount the RAID and run e2fsck again (you can try a backup superblock)

•If it still fails, try a data recovery program (PhotoRec, R-Studio) or contact a data recovery company

IV – How to Fix if RAID seems Not Active

If your RAID system looks like the picture below, follow this document. If not, please don’t try anything in this document:

Update your QNAP firmware with QNAP Finder to firmware 3.7.2 or higher. Then go to Disk Management -> RAID Management, choose your RAID and press “Recover” to fix it.

If this doesn’t work and the “Recover” button is still available, just follow these steps:

While the device is still running, pull out the HDD you suspect may be broken, and press the Recover button again.

Pull out the 1st HDD and press Recover. If that doesn’t work, plug the HDD back in and try the same steps with the 2nd HDD.

IF THIS DOESN’T WORK, PLUG THESE HDDs BACK IN AGAIN, AND PRESS RECOVER.

Now pull out another HDD and press the “Recover” button again.

IF THIS DOESN’T WORK, PLUG THESE HDDs BACK IN AGAIN, AND PRESS RECOVER.

I was able to fix two customers’ RAID systems this way, without typing any Linux commands.

But I must warn you again: the best choice is to request help from the QNAP Taiwan support team.

Of course, this may not work. Here is another case and how I fixed it:

First I tried the “Recover” method, but it didn’t work. In the QNAP RAID management menu I checked all the HDDs, but all of them seemed good. So I logged in with PuTTY and typed these commands:

mdadm -E /dev/sda3

mdadm -E /dev/sdb3

mdadm -E /dev/sdc3

mdadm -E /dev/sdd3

Except for the first HDD, the other 3 HDDs don’t have an md superblock. I also tried the “config_util 1” and “storage_boot_init 2” commands, but both of them gave errors.

The customer had RAID 5 (-l 5) with 4 HDDs (-n 4), so I typed this command:

# mdadm -CfR --assume-clean /dev/md0 -l 5 -n 4 /dev/sda3 /dev/sdb3 /dev/sdc3 /dev/sdd3

Then mount with this command:

# mount /dev/md0 /share/MD0_DATA -t ext4

And it works perfectly.

Here are the PuTTY steps as well:

login as: admin

admin@192.168.101.16′s password:

[~] # mdadm -E /dev/sda3

/dev/sda3:

Magic : a92b4efc

Version : 00.90.00

UUID : 2d2ee77d:045a6e0f:438d81dd:575c1ff3

Creation Time : Wed Jun 6 20:11:14 2012

Raid Level : raid5

Used Dev Size : 1951945600 (1861.52 GiB 1998.79 GB)

Array Size : 5855836800 (5584.56 GiB 5996.38 GB)

Raid Devices : 4

Total Devices : 4

Preferred Minor : 0

Update Time : Fri Jan 11 10:24:40 2013

State : clean

Active Devices : 4

Working Devices : 4

Failed Devices : 0

Spare Devices : 0

Checksum : 8b330731 – correct

Events : 0.4065365

Layout : left-symmetric

Chunk Size : 64K

Number Major Minor RaidDevice State

this 0 8 3 0 active sync /dev/sda3

0 0 8 3 0 active sync /dev/sda3

1 1 8 19 1 active sync /dev/sdb3

2 2 8 35 2 active sync /dev/sdc3

3 3 8 51 3 active sync /dev/sdd3

[~] # mdadm -E /dev/sdb3

mdadm: No md superblock detected on /dev/sdb3.

[~] # mdadm -CfR --assume-clean /dev/md0 -l 5 -n 4 /dev/sda3 /dev/sdb3 /dev/sdc3 /dev/sdd3

mdadm: /dev/sda3 appears to contain an ext2fs file system

size=1560869504K mtime=Fri Jan 11 10:22:54 2013

mdadm: /dev/sda3 appears to be part of a raid array:

level=raid5 devices=4 ctime=Wed Jun 6 20:11:14 2012

mdadm: /dev/sdd3 appears to contain an ext2fs file system

size=1292434048K mtime=Fri Jan 11 10:22:54 2013

mdadm: array /dev/md0 started.

[~] # more /proc/mdstat

Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath]

md0 : active raid5 sdd3[3] sdc3[2] sdb3[1] sda3[0]

5855836800 blocks level 5, 64k chunk, algorithm 2 [4/4] [UUUU]

md4 : active raid1 sda2[2](S) sdd2[0] sdc2[3](S) sdb2[1]

530048 blocks [2/2] [UU]

md13 : active raid1 sda4[0] sdc4[3] sdd4[2] sdb4[1]

458880 blocks [4/4] [UUUU]

bitmap: 0/57 pages [0KB], 4KB chunk

md9 : active raid1 sda1[0] sdc1[3] sdd1[2] sdb1[1]

530048 blocks [4/4] [UUUU]

bitmap: 1/65 pages [4KB], 4KB chunk

unused devices:

[~] # mount /dev/md0 /share/MD0_DATA -t ext4

[~] #

Here is the result:

QNAP Taiwan advice:

IF YOU HAVE CRITICAL DATA ON THE QNAP, PLEASE CONTACT QNAP TAIWAN SUPPORT

RAID fail – RAID can’t be assembled or status is inactive:

1. Check partitions and md superblock status

2. Check if any RAID disk is missing / faulty

3. Use “mdadm -CfR --assume-clean” to recreate the RAID

V – RAID HDD order seems wrong, e.g. “RAID 5 – Drives: 2 4 3”, and device seems Not Active

If your RAID order seems like this:

First try RAID recovery. If it still fails:

Follow this document:

Download WinSCP and log in to your QNAP. Go to /etc -> raidtab and first take a backup of this file!

Then double-click on this file. In this table, sda means your first HDD, sdb your second and sdc your 3rd HDD, and their order seems wrong.

The correct table should look like the one below, so modify the RAID entry like this:

RAID-5

raiddev /dev/md0

raid-level 5

nr-raid-disks 3

nr-spare-disks 0

chunk-size 4

persistent-superblock 1

device /dev/sda3

raid-disk 0

device /dev/sdb3

raid-disk 1

device /dev/sdc3

raid-disk 2

In this case I have 4 HDDs, so change nr-raid-disks to 4 and also add these lines:

device /dev/sdd3

raid-disk 3

It should look like this:
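A sketch of the resulting four-disk raidtab, following the same layout as the three-disk example above:

raiddev /dev/md0
raid-level 5
nr-raid-disks 4
nr-spare-disks 0
chunk-size 4
persistent-superblock 1
device /dev/sda3
raid-disk 0
device /dev/sdb3
raid-disk 1
device /dev/sdc3
raid-disk 2
device /dev/sdd3
raid-disk 3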

Now, save this file, and restart your Qnap.

VI – How to fix it if all of your HDDs appear as "Single Disk" even though you have a RAID structure, or the RAID was accidentally removed

I highly recommend contacting QNAP Taiwan support, but if you know what you are doing, here is the how-to-fix document:

RAID Issue – raidtab is broken

• raidtab is used to check whether a disk is in a RAID group or single, and to show the RAID information in the web UI.

• If the disk is in a RAID but the web UI shows it as single, or the RAID information differs from the actual on-disk RAID data (checked with mdadm -E), then the raidtab is probably corrupt. In that case you need to manually edit the raidtab file to match the actual RAID status; a quick way to compare the two views is sketched after this list.

• Check the following slides for the raidtab contents of each configuration.
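A quick way to compare the two views, as mentioned above (a sketch; /etc/raidtab is the file reached via WinSCP in section V):

# mdadm -E /dev/sda3 ** what the superblock on the disk itself says (level, members, order)

# cat /etc/raidtab ** what the web UI believes; edit this file if it disagrees with the disks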

Single

No raidtab

RAID 0 Striping

raiddev /dev/md0

raid-level 0

nr-raid-disks 2

nr-spare-disks 0

chunk-size 4

persistent-superblock 1

device /dev/sda3

raid-disk 0

device /dev/sdb3

raid-disk 1

RAID-1 Mirror

raiddev /dev/md0

raid-level 1

nr-raid-disks 2

nr-spare-disks 0

chunk-size 4

persistent-superblock 1

device /dev/sda3

raid-disk 0

device /dev/sdb3

raid-disk 1

JBOD Linear

raiddev /dev/md0

raid-level linear

nr-raid-disks 3

nr-spare-disks 0

chunk-size 4

persistent-superblock 1

device /dev/sda3

raid-disk 0

device /dev/sdb3

raid-disk 1

device /dev/sdc3

raid-disk 2

RAID-5

raiddev /dev/md0

raid-level 5

nr-raid-disks 3

nr-spare-disks 0

chunk-size 4

persistent-superblock 1

device /dev/sda3

raid-disk 0

device /dev/sdb3

raid-disk 1

device /dev/sdc3

raid-disk 2

RAID-5 + Hot spare

raiddev /dev/md0

raid-level 5

nr-raid-disks 3

nr-spare-disks 1

chunk-size 4

persistent-superblock 1

device /dev/sda3

raid-disk 0

device /dev/sdb3

raid-disk 1

device /dev/sdc3

raid-disk 2

device /dev/sdd3

spare-disk 0

RAID-5 + Global Spare

The raidtab is the same as for RAID-5.

In uLinux.conf, add a line if the global spare disk is disk 4:

[Storage]

GLOBAL_SPARE_DRIVE_4 = TRUE

RAID-6

raiddev /dev/md0

raid-level 6

nr-raid-disks 4

nr-spare-disks 0

chunk-size 4

persistent-superblock 1

device /dev/sda3

raid-disk 0

device /dev/sdb3

raid-disk 1

device /dev/sdc3

raid-disk 2

device /dev/sdd3

raid-disk 3

RAID-10

raiddev /dev/md0

raid-level 10

nr-raid-disks 4

nr-spare-disks 0

chunk-size 4

persistent-superblock 1

device /dev/sda3

raid-disk 0

device /dev/sdb3

raid-disk 1

device /dev/sdc3

raid-disk 2

device /dev/sdd3

raid-disk 3

VII – User removed the RAID volume

# more /proc/mdstat

**Check if the RAID is really removed

# mdadm -E /dev/sda3

** Check if the MD superblock is really removed

# mdadm -CfR --assume-clean /dev/md0 -l 5 -n 3 /dev/sda3 /dev/sdb3 /dev/sdc3

** Recreate the RAID, assuming it is a 3-HDD RAID 5

# e2fsck -y /dev/md0

** Check the file system, answering "yes" to all questions. On 64-bit models, use e2fsck_64

# mount /dev/md0 /share/MD0_DATA -t ext4

** mount the RAID back

# vi raidtab

** manually create the raid table

# reboot

** Need to add the removed network share(s) back after reboot

VIII – How to fix it if 2 HDDs give errors on RAID 5, or 3 HDDs give errors on RAID 6

If you cannot reach your data on the QNAP, plug the HDDs into another QNAP (I have saved all the data of two customers this way before).

If you can reach your data, quickly back it up to another QNAP or an external drive. After the backup completes, reinstall the QNAP RAID system.

IX – RAID fail – HDDs have no partitions

When using the following commands to check the HDD, there is no partition or only one partition.

# parted /dev/sdx print

The following is a sample.

# blkid ** this command shows all partitions on the NAS

Note: fdisk -l cannot show the correct partition table for 3TB HDDs.
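To check every member at once, a small loop can help (a sketch, assuming four disks sda through sdd):

# for d in /dev/sda /dev/sdb /dev/sdc /dev/sdd; do parted $d print; done ** print each disk's partition table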

The following tool (x86 only) can help calculate the correct partition sizes according to the HDD size. Please save it on your NAS (x86 models only) and make sure the file size is 10,086 bytes.

ftp://csdread:csdread@ftp.qnap.com/NAS/utility/Create_Partitions
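After downloading, it may be worth verifying the size and making the tool executable before running it (a sketch, assuming the file was saved to the current directory):

# ls -l Create_Partitions ** confirm the size is exactly 10086 bytes

# chmod +x Create_Partitions ** make the tool executable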

1. Get every disk size:

# cat /sys/block/sda/size

625142448

2. Get the disk partition list. It should contain 4 partitions if everything is normal:

# parted /dev/sda print

Model: Seagate ST3320620AS (scsi)

Disk /dev/sda: 320GB

Sector size (logical/physical): 512B/512B

Partition Table: msdos

Number Start End Size Type File system Flags

1 32.3kB 543MB 543MB primary ext3 boot

2 543MB 1086MB 543MB primary linux-swap(v1)

3 1086MB 320GB 318GB primary ext3

4 320GB 320GB 510MB primary ext3

3. Run the tool on your NAS to get the recovery commands:

# Create_Partitions /dev/sda 625142448

/dev/sda size 305245

disk_size=625142448

/usr/sbin/parted /dev/sda -s mkpart primary 40s 1060289s

/usr/sbin/parted /dev/sda -s mkpart primary 1060296s 2120579s

/usr/sbin/parted /dev/sda -s mkpart primary 2120584s 624125249s

/usr/sbin/parted /dev/sda -s mkpart primary 624125256s 625121279s

If the disk contains no partitions, run all 4 commands.

If the disk contains only 1 partition, run the last 3 commands.

If the disk contains only 2 partitions, run the last 2 commands.

If the disk contains only 3 partitions, run the last command.

4. Check the disk partitions after the recovery. The disk should contain 4 partitions now.

# parted /dev/sda print

Model: Seagate ST3320620AS (scsi)

Disk /dev/sda: 320GB

Sector size (logical/physical): 512B/512B

Partition Table: msdos

Number Start End Size Type File system Flags

1 32.3kB 543MB 543MB primary ext3 boot

2 543MB 1086MB 543MB primary linux-swap(v1)

3 1086MB 320GB 318GB primary ext3

4 320GB 320GB 510MB primary ext3

5. Then run "sync" or reboot the NAS for the new partitions to take effect.

X – RAID fail – Partitions have no md superblock

• If one or all HDD partitions are lost, or the partitions have no md superblock for an unknown reason, use the mdadm -CfR command to recreate the RAID.

# mdadm -CfR --assume-clean /dev/md0 -l 5 -n 4 /dev/sda3…

Note:

1. Make sure the disks are in the correct sequence. Use "mdadm -E" or check raidtab to confirm.

2. If one of the disks is missing or has a problem, replace that disk with the keyword "missing".

For example:

# mdadm -CfR --assume-clean /dev/md0 -l 5 -n 4 /dev/sda3 missing /dev/sdc3 /dev/sdd3
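Once the degraded array is mounted and the data has been verified (or backed up), the replacement disk can be added back so the RAID rebuilds (a sketch, assuming the replaced member is /dev/sdb3):

# mdadm /dev/md0 -a /dev/sdb3 ** add the new member; the RAID 5 will rebuild onto it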

XI – No md0 for array

Manually create md0 with mdadm -CfR, as shown in the examples above.

XII – NAS fail – Mount HDD(s) with another QNAP NAS

• You can plug the HDD(s) into another NAS of the same model to access the data.

• You can plug the HDD(s) into a NAS of a different model and access the data by performing a system migration:

http://docs.qnap.com/nas/en/index.html?system_migration.htm

Note: the TS-101/201/109/209/409/409U series does not support system migration.

• Since the firmware is also stored on the HDD(s), its version may differ from the firmware on the NAS. A firmware upgrade may be required after the above operation.

Obtained from this link

Last updated 05/03/2023 13:36