How To Update Qnap Firmware
This document contains the following sections:
I – Before Starting
II – How to Upgrade Firmware With Qnapfinder
III – How to Upgrade Firmware With Qnap Interface
IV – How to Upgrade Firmware With Putty
V – How to Upgrade Firmware With Qnap Live Update Feature
VI – Trouble With Downloading Qnap Firmware?
VII – Qnap Firmware Update Troubleshooting;
VIII – I Have a Problem Installing Qnap Firmware / Qnap Firmware Shows as 1.0.0 (1119T), How Can I Fix This Problem?
I – Before Starting
For a firmware update, you must have an installed system. You cannot update the firmware without any HDD installed, except on some models.
You can upgrade the firmware in four ways.
First, download the Qnap firmware from http://www.qnap.com.
After the download completes, unzip the image file to your desktop.
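Optional: to make sure the download is not corrupted, you can compare the file's MD5 checksum with the one published on the QNAP download page (if a checksum is listed for that build). This is only a quick sketch on a Linux PC; the file name is just an example, use the image you actually downloaded:
# md5sum TS-809U_3.4.2_Build0331.img -> the output must match the checksum shown on the download page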
II – How to Upgrade Firmware With Qnapfinder
Open Qnapfinder and choose your Qnap. At the top of the menu, choose Tools -> Update Firmware. This is the best, easiest and safest way to update.
Update Firmware by Finder
The NAS firmware can be updated by the QNAP Finder. Select a NAS model and choose “Update Firmware” from the “Tools” menu.
Log in to the NAS as an administrator.
Browse and select the firmware for the NAS. Click “Start” to update the system.
Note: NAS servers of the same model on the same LAN can be updated by the Finder at the same time. Administrator access is required for the system update.
III – How to Upgrade Firmware With Qnap Interface
Log in to the Qnap, go to the Administration page -> Firmware Update, and browse to your Qnap image file.
Update Firmware by Web Administration Page
Note: If the system is running properly, you do not need to update the firmware.
1. Download the release notes of the firmware from the QNAP website http://www.qnap.com. Read the release notes carefully to make sure it is required to update the firmware.
2. Download the NAS firmware and unzip the IMG file to the computer.
3. Before updating the system firmware, back up all the disk data on the NAS to avoid any potential data loss during the system update.
4. Click “Browse” to select the correct firmware image for the system update. Click “Update System” to update the firmware.
IV – How to Upgrade Firmware With Putty
1 – Download the Qnap firmware and unzip it to the Public folder. The unzipped image file should look like this: TS-809U_3.4.2_Build0331.img
2 – Log in to the Qnap via Putty and type these commands:
# mv /share/Public/TS-809U_3.4.2_Build0331.img /mnt/HDA_ROOT/update/
# ln -sf /mnt/HDA_ROOT/update /mnt/update
# /etc/init.d/update.sh /mnt/HDA_ROOT/update/TS-809U_3.4.2_Build0331.img
Then reboot the device with this command:
# reboot
From : http://forum.qnap.com/viewtopic.php?p=236036
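Two optional sanity checks around the commands above, assuming the standard QTS layout (the config file path and key name below are assumptions, so treat this only as a sketch):
# df -h /mnt/HDA_ROOT -> before running update.sh, make sure the update partition has enough free space for the image
# grep -i version /etc/config/uLinux.conf -> after the reboot, confirm the running firmware version has changed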
V – How to Upgrade Firmware With Qnap Live Update Feature
Trust me and stay away from this option. But if you want to learn how:
Live Update
Select “Enable live update” to allow the NAS to automatically check if a new firmware version is available for download from the Internet. If a new firmware is found, you will be notified after logging in to the NAS as an administrator.
Click “CHECK FOR UPDATE” to check if any firmware update is available.
Note that the NAS must be connected to the Internet for these features to work.
.
VI – Trouble With Downloading Qnap Firmware?
First, go to http://www.qnap.com, then Support -> Download.
Choose your NAS device type from the category list and your NAS model from the dropdown menu on the right. If your device model is not in that list, choose “Archive” from the left dropdown menu.
You can choose an older firmware version if you want. Download from the Europe mirror if it is closer to you.
After the download completes, you can install the firmware.
.
VII – Qnap Firmware Update Troubleshooting;
Qnap gives an error when I try to install the firmware:
Make sure you have an installed device, then try to update the firmware again.
Make sure you downloaded the right firmware. The TS-459 and TS-459U firmware are not the same thing.
If you still have problems installing the firmware, please contact Qnap support.
Qnap Firmware Stuck at 20%:
Download an older firmware version from this link:
http://web.qnap.com/download.asp
Choose an older firmware by switching the right dropdown menu to “All”.
Restart the Qnap and try the firmware update again.
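If the update keeps stopping at the same point, a leftover image from the failed attempt may still be sitting in the update folder. This is only an assumption based on the Putty method in section IV; if you are comfortable with the shell, you can check and clean it before retrying:
# ls /mnt/HDA_ROOT/update/ -> look for an old .img file from the failed attempt
# rm /mnt/HDA_ROOT/update/*.img -> remove it, then retry the firmware update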
VIII – I Have a Problem Installing Qnap Firmware / Qnap Firmware Shows as 1.0.0 (1119T), How Can I Fix This Problem?
How to Fix;
1 – NAS Firmware Update When No HDD(s) Installed
From QNAPedia
NOTE:
- These procedures are used to update the flash / DOM (disk on module) firmware on the NAS.
- Make sure no HDD is installed before the update.
- A flash image is required for NAS models with 16MB flash or a 128MB DOM.
- For NAS models with a 512MB DOM, just use the firmware on the QNAP download site (http://www.qnap.com/download.asp).
- This update procedure is generally used when the NAS can be found by QNAP Finder but cannot be initialized with HDD(s).
- QNAP NAS also stores the firmware on the HDDs. If you have initialized the NAS with HDDs before, this update procedure will cause a firmware version mismatch between the NAS and the HDDs. You need to update the firmware again after booting up with the HDDs installed.
2 – Update Procedures:
- Power off the NAS
- Remove all the HDDs
- Power on the NAS
- After a short beep and a long beep (about two minutes after the first short beep), run the QNAP Finder (make sure the Finder is the latest version).
- QNAP Finder should find the NAS and its IP
- Select the NAS and click “Tools” -> “Update Firmware”
- Input default username and password (admin/admin)
- Select the image files on your PC for the NAS.
- After the firmware update, the NAS will reboot.
- Make sure the firmware has been updated.
- Power off the NAS.
- Plug all the HDDs back in.
- After power on, follow the messages in the Finder to initialize the NAS.
From : http://wiki.qnap.com/w/index.php?title=NAS_Firmware_Update_When_No_HDD(s)_Installed
iSCSI Advanced ACL on QNAP Turbo NAS
This document shows you how to configure iSCSI Advanced ACL (access control list) on QNAP Turbo NAS and verify the settings. All x86-based Turbo NAS models (TS-x39, TS-x59, TS-509, and TS-809) support this feature. Refer to the comparison table: http://www.qnap.com/images/products/comparison/Comparison_NAS.html
Introduction
In a clustered network environment, multiple iSCSI initiators can be allowed to access the same iSCSI LUN (Logical Unit Number) by cluster aware file system or SCSI fencing mechanism. The cluster aware mechanism provides file locking to avoid file system corruption.
If you do not use iSCSI service in a clustered environment and the iSCSI service is connected by more than two initiators, you will need to prevent multiple accesses to an iSCSI LUN at the same time. QNAP iSCSI Advanced ACL (Access Control List) offers you a safe way to set up your iSCSI environment. You can create LUN masking policy to configure the permission of the iSCSI initiators which attempt to access the LUN mapped to the iSCSI targets on the NAS.
LUN Masking is used to define the LUN access rights for a connected iSCSI initiator. If an initiator is not assigned to any LUN Masking policy, the default policy will be applied (see Figure 1). You can set up the following LUN access rights for each connected initiator:
- Read-only: The connected initiator can only read the data from the LUNs.
- Read/Write: The connected initiator has Read and Write permission to the LUNs.
- Deny Access: The LUN is invisible to the connected initiator.
Assumptions:
This how-to demonstrates how to configure advanced ACL on the QNAP Turbo NAS. The test environment is set up as shown in Table 1. Host 1 and Host 2 connect to the same iSCSI target, which has 3 LUNs. The file system format of the LUNs is NTFS. The default policy is to deny access from all initiators. The LUN permissions for the two initiators are listed in Table 2.
Note: If some iSCSI initiators have connected to the iSCSI targets when you are modifying the ACL settings, all modifications will take effect only after those connected initiators disconnect and reconnect to the iSCSI targets.
Figure 1: Flowchart of Advanced ACL
System | Information |
Host 1 | OS: Windows 2008; Initiator IQN: iqn.1991-05.com.microsoft:host1 |
Host 2 | OS: Windows 2008; Initiator IQN: iqn.1991-05.com.microsoft:host2 |
QNAP NAS | iSCSI target IQN: iqn.2004-04.com.qnap:ts-439proii:iscsi.test.be23e6; LUN 1 name: lun-1, size: 10GB; LUN 2 name: lun-2, size: 20GB; LUN 3 name: lun-3, size: 30GB |
Table 1: Test Environment
LUN | Host 1 | Host 2 |
LUN 1 | Deny | Read Only |
LUN 2 | Read Only | Read/Write |
LUN 3 | Read/Write | Deny |
Table 2: LUN Masking Settings
iSCSI configuration on QNAP NAS
Default Policy Settings
Log in to the web administration interface of the NAS as an administrator. Go to «Disk Management» > «iSCSI» > «ADVANCED ACL». Click the edit button to edit the default policy.
Figure 2: Default Policy
Select «Deny Access» to deny access to all LUNs. Click «APPLY».
Figure 3: Default Policy Configuration
Configure LUN masking for Host 1:
- Click «Add a Policy».
- Enter «host1-policy» in the «Policy Name».
- Enter «iqn.1991-05.com.microsoft:host1» in the «Initiator IQN».
- Set the LUN permission according to Table 2: LUN Masking Settings.
- Click «APPLY».
Repeat the above steps to configure the LUN permission for Host 2.
Figure 4: Add a New Policy
Figure 5: Configure New Policy for Host 1
Figure 6: Configure New Policy for Host 2
Hint: How do I find the initiator IQN?
On Host 1 and Host 2, start Microsoft iSCSI initiator and click «General». You can find the IQN of the initiator as shown below.
Verify the settings
To verify the configuration, we can connect to this iSCSI target on Host 1 and Host 2.
Verification on Host 1:
- Connect to the iSCSI service. (Refer to “Connect to iSCSI targets by Microsoft iSCSI initiator on Windows” for details.)
- On the Start menu in Windows OS, right click «Computer» > «Manage». On the «Server Manager» window, click «Disk Management».
Host 1 has no access permission to LUN-1 (10 GB). Therefore, only two disks are listed. Disk 1 (20 GB) is read only and Disk 2 (30 GB) is writable.
Verification on Host 2:
Repeat the same steps when verifying Host 2. Two disks are listed in «Server Manager». Disk 1 (10 GB) is read only and Disk 2 (20 GB) is writable.
QNAP TS-x79 series Turbo NAS with VMware ESXi 5.0
Introduction
This document provides basic guidelines to show you how to configure the QNAP TS-x79 series Turbo NAS as the iSCSI datastore for VMware ESXi 5.0. The TS-x79 series Turbo NAS offers class-leading system architecture matched with 10 GbE networking performance designed to meet the needs of demanding server virtualization. With the built-in iSCSI feature, the Turbo NAS is an ideal storage solution for the VMware virtualization environment.
Recommendations
The following recommendations (illustrated in Figure 1) are provided for you to utilize the QNAP Turbo NAS as an iSCSI datastore in a virtualized environment.
- Create each iSCSI target with only one LUN
Each iSCSI target on the QNAP Turbo NAS is created with two service threads to deal with protocol PDU sending and receiving. If the target hosts multiple LUNs, the IO requests for these LUNs will be served by the same thread set, which results in data transfer bottleneck. Therefore, you are recommended to assign only one LUN to an iSCSI target.
- Use “instant allocation” to create iSCSI LUN
Choose “instant allocation” when creating an iSCSI LUN for higher read/write performance in an I/O intensive environment. Note that creating the iSCSI LUN will take longer than creating a LUN with “thin provisioning”.
- Store the operating system (OS) and the data of a VM on different LUNs
Have a VM datastore to store a VM with a dedicated vmnic (virtual network interface card) and map another LUN to the VM to store its data. Use another vmnic to connect to the data.
- Use multiple targets with LUNs to create an extended datastore to store the VMs
When a LUN is connected to the ESXi hosts, iSCSI will be represented as a single iSCSI queue on the NAS. When the LUN is shared among multiple virtual machine disks, all I/O has to serialize through the iSCSI queue and only one virtual disk’s traffic can traverse the queue at any point in time. This leaves all other virtual disks’ traffic waiting in line. The LUN and its respective iSCSI queue may become congested and the performance of the VMs may decrease. Therefore, you can create multiple targets with LUNs as an extended datastore to allow more iSCSI queues to deal with VMs access. In this practice, we will use four LUNs as an extended datastore in VMware.
- For normal datastore, limit the number of VMs per datastore to 10
If you just want to have one LUN as a datastore, you are recommended to implement no more than 10 virtual machines per datastore. The actual number of VMs allowed may vary depending on the environment.
Note:
Be careful with a datastore shared by multiple ESX hosts. VMFS is a clustered file system and uses SCSI reservations as part of its distributed locking algorithms. Administrative operations, such as creating or deleting a virtual disk, extending a VMFS volume, or creating or deleting snapshots, result in metadata updates to the file system using locks, and thus result in SCSI reservations. A reservation causes the LUN to be available exclusively to a single ESX host for a brief period of time, which impacts VM performance.
Deployment topology
The following items are required to deploy the Turbo NAS with VMware ESXi 5.0:
- One ESXi 5.0 host
- Three NIC ports on ESXi host
- Two Ethernet switches
- QNAP Turbo NAS TS-EC1279U-RP
Network configuration of ESXi host:
vmnic | IP address/subnet mask | Remark |
vmnic 0 | 10.8.12.28/23 | Console management (not necessary) |
vmnic 1 | 10.8.12.85/23 | A dedicated interface for VM datastore |
vmnic 2 | 168.95.100.101/16 | A dedicated interface for VM data LUN |
Network configuration of TS-EC1279U-RP Turbo NAS :
Network Interface | IP address/subnet mask | Remark |
Ethernet 1 | 10.8.12.125/23 | A dedicated interface for VM datastore |
Ethernet 2 | 168.95.100.100/16 | A dedicated interface for VM data LUN |
Ethernet 3 | Not used in this demonstration | |
Ethernet 4 | Not used in this demonstration |
iSCSI configuration of TS-EC1279U-RP Turbo NAS:
iSCSI Target | iSCSI LUN | Remark |
DataTarget | DataLUN | To store VM data |
VMTarget1 | VMLUN1 | For the extended VM datastore |
VMTarget2 | VMLUN2 | For the extended VM datastore |
VMTarget3 | VMLUN3 | For the extended VM datastore |
VMTarget4 | VMLUN4 | For the extended VM datastore |
Switches
Switch | Port | Remark |
A | 0 | To connect to Ethernet 1 of the Turbo NAS |
A | 1 | To connect to vmnic 1 of the ESXi server |
B | 0 | To connect to Ethernet 2 of the Turbo NAS |
B | 1 | To connect to vmnic 2 of the ESXi server |
Note: The iSCSI adapters should be on a private network.
Implementation
Configure the network settings of the Turbo NAS
Log in to the web administration page of the Turbo NAS. Go to “System Administration” > “Network” > “TCP/IP”. Configure standalone network settings for Ethernet 1 and Ethernet 2.
- Ethernet 1 IP: 10.8.12.125
- Ethernet 2 IP: 168.95.100.100
Note: Enable “Balance-alb” bonding mode or 802.3ad aggregation mode (an 802.3ad compliant switch required) to allow inbound and outbound traffic link aggregation.
Create iSCSI targets with LUNs for the VM and its data on the NAS
Log in to the web administration page of the Turbo NAS. Go to “Disk Management” > “iSCSI” > “Target Management” and create five iSCSI targets, each with an instant allocation LUN (see Figure 6). VMLUNs 1-4 will be merged into an extended datastore to store your VMs. DataLUN, 200 GB with instant allocation, will be used as the data storage for the VM.
For the details of creating an iSCSI target and LUN on the Turbo NAS, please see the application note “Create and use the iSCSI target service on the QNAP NAS” on http://www.qnap.com/en/index.php?lang=en&sn=5319. Once the iSCSI targets and LUNs have been created on the Turbo NAS, use the VMware vSphere Client to log in to the ESXi server.
Configure the network settings of the ESXi server
Run the VMware vSphere Client and select the host. Under “Configuration” > “Hardware” > “Networking”, click “Add Networking” to add a vSwitch with a VMkernel port (VMPath) for the VM datastore connection. The VM will use this iSCSI port to communicate with the NAS. The IP address of this iSCSI port is 10.8.12.85. Then add another vSwitch with a VMkernel port (DataPath) for the data connection of the VM. The IP address of this iSCSI port is 168.95.100.101.
Enable iSCSI software adapter in ESXi
Select the host. Under “Configuration” > “Hardware” > “Storage Adapters”, select “iSCSI Software Adapter”. Then click “Properties” in the “Details” panel.
Click “Configure” to enable the iSCSI software adapter.
Bind the iSCSI ports to the iSCSI adapter
Select the host. Under “Configuration” > “Hardware” > “Storage Adapters”, select “iSCSI Software Adapter”. Then click “Properties” in the “Details” panel. Go to the “Network Configuration” tab and then click “Add” to add the VMkernel ports: VMPath and DataPath.
Connect to the iSCSI targets
Select the host. Under “Configuration” > “Hardware” > “Storage Adapters”, select “iSCSI Software Adapter”. Then click “Properties” in the “Details” panel. Go to the “Dynamic Discovery” tab and then click “Add” to add one of your NAS IP addresses (10.8.12.125 or 168.95.100.100). Then click “Close” to rescan the software iSCSI bus adapter.
After rescanning the software iSCSI bus adapter, you can see the connected LUN in the “Details” panel.
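If you prefer the ESXi command line (for example over SSH), the same enable / bind / discover / rescan steps can also be done with esxcli. This is only a rough sketch; the adapter name vmhba33 and the VMkernel NICs vmk1/vmk2 are assumptions, so check the real names on your host first with “esxcli iscsi adapter list” and “esxcfg-vmknic -l”:
# esxcli iscsi software set --enabled=true
# esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk1
# esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk2
# esxcli iscsi adapter discovery sendtarget add --adapter=vmhba33 --address=10.8.12.125
# esxcli storage core adapter rescan --adapter=vmhba33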
Select the preferred path for each LUN
Right click each of the VMLUNs (1-4) and click “Manage Path…” to specify their paths.
Change Path Selection to “Fixed (VMware)”.
Then select the dedicated path (10.8.12.125) and click “Preferred” to set the path for the VM connection.
Repeat the above steps to set up the preferred path (168.95.100.100) for DataLUN (200 GB).
Create and enable a datastore
Once the iSCSI targets have been connected, you can add your datastore on a LUN. Select the host. Under “Configuration” > “Hardware” > “Storage”, click “Add storage…” and select one of the VMLUNs (1 TB) to enable the new datastore (VMdatastore). After a few seconds, you will see the datastore in the ESXi server.
Merge other VMLUNs to the datastore
Select the host. Under “Configuration” > “Hardware” > “Storage”, right click VMdatastore and click “Properties”.
Click “Increase”.
Select the other three VMLUNs and merge them as a datastore.
After all the VMLUNs are merged, the size of the VMdatastore will become 4 TB.
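To double-check the merge from the ESXi shell, you can list the VMFS extents and mounted file systems. Just an optional verification sketch:
# esxcli storage vmfs extent list -> VMdatastore should show four extents, one per VMLUN
# esxcli storage filesystem list -> the total size of VMdatastore should be about 4 TB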
Create your VM and store it in the VM datastore
Right click the host to create a new virtual machine, select VMdatastore as its destination storage. Click “Next” and follow the wizard to create a VM.
Attach the DataLUN to the VM
Right click the VM that you just created, click “Edit Settings…” to add a new disk. Then, click “Next”.
Select “Raw Device Mappings” (RDM) and click “Next”.
Select DataLUN (200 GB) and click “Next”. Follow the wizard to add a new hard disk.
After a few seconds, a new hard disk will be added on your VM.
Your VM is ready to use
All the VM settings have been finished. Now start the VM and install your applications or save the access data in the RDM disk. If you wish to create another VM, please save the second VM to VMdatastore and create a new LUN on the Turbo NAS for its data access.
Hot-swapping the hard drives when the RAID crashes
«No server downtime when you need to replace the RAID drives»
Contents
- Logical volume status when the RAID operates normally
- When a drive fails, follow the steps below to check the drive status:
- Install a new drive to rebuild RAID 5 by hot swapping
Procedure of hot-swapping the hard drives when the RAID crashes
RAID 5 uses block-level striping with distributed parity across the member drives to provide secure data protection. You need at least three hard drives, preferably of the same capacity, to create a RAID 5 array. RAID 5 protects data against a single drive failure. The usable capacity of a RAID 5 array is the number of member drives minus one, multiplied by the capacity of the smallest member drive. It is particularly suitable for personal or company use for saving important data.
Logical volume status when the RAID operates normally
When the RAID volume operates normally, the volume status is shown as Ready under ‘Disk Management’ > ‘Volume Management’ section.
When a drive fails, follow the steps below to check the drive status:
When RAID volume operates normally, the volume status is shown as Ready in the Current Disk Volume Configuration section.
- The server beeps for 1.5 sec twice when the drive fails.
- The Status LED flashes red continuously.
- Check the Current Disk Volume Configuration section. The volume status is In degraded mode.
You can check the error and warning messages for drive failure and disk volume in degraded mode respectively in the Event Logs.
Note: You can send and receive alert e-mails by configuring the alert notification. For the settings, please refer to the System Settings / Alert Notification section in the user manual.
Install a new drive to rebuild RAID 5 by hot swapping
Please follow the steps below to hot swap the failed hard drive:
- Prepare a new hard drive to rebuild RAID 5. The capacity of the new drive should be at least the same as that of the failed drive.
- Insert the drive to the drive slot of the server. The server beeps for 1.5 seconds twice. The Status LED flashes red and green alternatively.
- Check the Current Disk Volume Configuration section. The volume status is Rebuilding and the progress is shown.
- When the rebuilding is completed, the Status LED lights green and the volume status is Ready. RAID 5 protection is now active.
- You can check the disk volume information in the Event Logs.
Note: Do not install the new drive when the system is not yet in degraded mode to avoid unexpected system failure.
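If you prefer to watch the failure and the rebuild from the command line instead of the web interface, the array can also be checked over SSH (the same tools are used in the RAID troubleshooting sections later in this document). /dev/md0 is the usual data volume on most models, so adjust it if yours differs:
# cat /proc/mdstat -> a rebuilding array shows a progress bar and percentage
# mdadm --detail /dev/md0 -> shows “State : active, degraded, recovering” while the rebuild runs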
QNAP RAID System Errors & How To Fix
I – Introduction;
II – How to Fix if RAID Seems “In Degraded Mode”
III – How to Fix “In Degraded Mode, Read Only, Failed Drive(s) X” Cases
IV – How to Fix if RAID Becomes “Unmounted” or “Not Active”
V – “Recover” Doesn't Work: How to Fix if RAID Becomes “Unmounted” or “Not Active”
VI – Stuck at Booting / Starting Services, Please Wait / Can't Even Log In to the Qnap Interface
VII – If the “config_util 1” Command Gives a “Mirror of Root Failed” Error
VIII – How to Fix a “Broken RAID” Scenario
IX – If None of These Guides Works;
..
I – Introduction;
Warning: This document is recommended for professional users only. If you don't know what you are doing, you may damage your RAID and lose data. Qnapsupport Taiwan is very good at solving these kinds of RAID corruptions easily, and my advice is to contact them directly in such cases.
1 – If the document says “plug out the HDD and plug it in”, always use the original RAID HDD in the same slot. Don't use new HDDs!
2 – If you can access your data after these steps, back it up quickly.
3 – You can lose data on hardware RAID devices and other NAS brands, and you can almost never lose data on a Qnap, but recovering your data can cost time, so a double backup is always recommended for future cases.
4 – If one of your HDDs causes the Qnap to restart itself when you plug it into the NAS, don't use that HDD when working on these kinds of cases.
This document is not valid for the Qnap TS-109 / TS-209 / TS-409 and their sub-models.
..
II – How to Fix if RAID Seems “In Degraded Mode”
If your RAID system looks like the example below, use this document. If not, please don't try anything in this document.
In this case, the RAID information reads “RAID 5 Drive 1 3”, so your 2nd HDD is out of the RAID. Just plug the broken HDD out of the 2nd HDD slot, wait around 15 seconds, and plug in a new HDD of the same size.
..
III – How to Fix “In Degraded Mode, Read Only, Failed Drive(s) X” Cases
If your system shows “In Degraded Mode, Failed Drive X”, you have probably lost more HDDs than the RAID can tolerate, so:
1 – Take your backup.
2 – Re-install the Qnap from the beginning.
Qnap's data protection features normally do not let you lose data even if 5 HDDs report bad-sector errors in a 6-HDD RAID system.
..
IV – How to Fix if RAID Becomes “Unmounted” or “Not Active”
If your RAID system looks like the picture below, follow this document. If not, please don't try anything in this document.
1 – Update your Qnap firmware to 3.7.2 or higher with Qnapfinder. Then go to Disk Management -> RAID Management, choose your RAID and press “Recover” to fix it.
If this doesn't work and the “Recover” button is still available, follow these steps:
1 – While the device is still running, plug out the HDD that you suspect is broken, and press the Recover button again.
2 – If you plug the HDD out -> then plug it into the same slot again -> and press “Recover” once again, it should repair the RAID.
3 – Here are the log files:
4 – The RAID system comes back in “Degraded Mode”, so just quickly back up your data and reinstall the Qnap without the broken HDD.
We lost 2 HDDs from a RAID 5, so it is impossible to fix this RAID again. Just re-install.
..
V – “Recover” Doesnt Work, How to Fix if RAID Becomes “Unmounted” or “Not Active”
1 – Download Putty & login Qnap,
2 – Make sure the RAID status is active.
To check, type:
# more /proc/mdstat
Also If you want to Stop Running Services;
# /etc/init.d/services.sh stop
Unmount the volume:
# umount /dev/md0
Stop the array:
# mdadm -S /dev/md0
Now try this command.
For RAID 5 with 4 HDDs:
mdadm -CfR --assume-clean /dev/md0 -l 5 -n 4 /dev/sda3 /dev/sdb3 /dev/sdc3 /dev/sdd3
-l 5 means RAID 5. If it is RAID 6, use -l 6.
-n 4 means the number of your HDDs. If you have 8 HDDs, use -n 8.
/dev/sda3 means your first HDD.
/dev/sdd3 means your 4th HDD.
Example;
If you have RAID 6 with 8 HDDs, change the command line to this:
mdadm -CfR --assume-clean /dev/md0 -l 6 -n 8 /dev/sda3 /dev/sdb3 /dev/sdc3 /dev/sdd3 /dev/sde3 /dev/sdf3 /dev/sdg3 /dev/sdh3
If one of your HDDs has a hardware error which causes the Qnap to restart itself, plug out that HDD and use the keyword “missing” in its place.
For example, if your 2nd HDD is broken, use this command:
# mdadm -CfR --assume-clean /dev/md0 -l 5 -n 4 /dev/sda3 missing /dev/sdc3 /dev/sdd3
3 – Try to mount manually:
# mount /dev/md0 /share/MD0_DATA -t ext3
# mount /dev/md0 /share/MD0_DATA -t ext4
# mount /dev/md0 /share/MD0_DATA -o ro (read only)
4 – The result should look like this:
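A quick way to confirm the volume really came back after the mount commands above (just a sanity check):
# df -h /share/MD0_DATA -> the data volume should show its normal size and used space
# ls /share/MD0_DATA -> your shared folders should be listed again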
..
VI – Stuck at Booting / Starting Services, Please Wait / Can't Even Log In to the Qnap Interface
Just plug out the broken HDD and restart the Qnap. This should bring the Qnap back up again.
If you are not sure which HDD is broken, please follow these steps:
1 – Power off the NAS.
2 – Plug out all HDDs.
3 – Start the Qnap without HDDs.
4 – Qnapfinder should find the Qnap in a few minutes. Now plug all HDDs back into the same slots.
5 – Download Putty from this link and log in with the admin / admin username / password:
http://www.chiark.greenend.org.uk/~sgtatham/putty/download.html
6 – Type these commands:
# config_util 1 -> If this command reports “Root Failed”, do not go on; contact the Qnapsupport team.
# storage_boot_init 1
# df -> If /dev/md9 (HDA_ROOT) appears full, please contact Qnapsupport.
(Now you can reboot your device with the command below. If you want to reset your configuration, please skip this.)
# reboot
..
VII – If the “config_util 1” Command Gives a “Mirror of Root Failed” Error
1 – Power off the NAS.
2 – Plug out all HDDs.
3 – Start the Qnap without HDDs.
4 – Qnapfinder should find the Qnap in a few minutes. Now plug all HDDs back into the same slots.
5 – Download Putty from this link and log in with the admin / admin username / password:
http://www.chiark.greenend.org.uk/~sgtatham/putty/download.html
6 – Type this command:
# storage_boot_init 2 -> (this time type storage_boot_init 2, not storage_boot_init 1)
This command should bring the Qnap back to its last “Degraded” mode, and you should see this kind of information after the command:
7 – Download WinSCP and log in to the Qnap. Go to -> share -> MD0_DATA folder and back up your data quickly.
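If WinSCP is too slow for a large volume, you can also copy the data to an external USB disk directly from the Putty shell. This is only a sketch; the external disk path below is an assumption, so check the real mount point under /share/external/ on your NAS first:
# ls /share/external/ -> find the mount point of your USB disk, for example sdt1
# mkdir -p /share/external/sdt1/backup
# cp -a /share/MD0_DATA/. /share/external/sdt1/backup/ -> copies everything, keeping permissions and timestamps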
..
VIII – How to Fix a “Broken RAID” Scenario
First, I tried the “Recover” method, but it didn't work. In the Qnap RAID management menu I checked all the HDDs, but all of them seemed fine. So I logged in with Putty and typed these commands:
mdadm -E /dev/sda3
mdadm -E /dev/sdb3
mdadm -E /dev/sdc3
mdadm -E /dev/sdd3
Except for the first HDD, the other 3 HDDs have no md superblock. I also tried the “config_util 1” and “storage_boot_init 2” commands, but both of them gave errors.
The customer had RAID 5 (-l 5) with 4 HDDs (-n 4), so I typed this command:
# mdadm -CfR --assume-clean /dev/md0 -l 5 -n 4 /dev/sda3 /dev/sdb3 /dev/sdc3 /dev/sdd3
Then I mounted it with this command:
# mount /dev/md0 /share/MD0_DATA -t ext4
And it works perfectly.
Here are the Putty steps:
login as: admin
admin@192.168.101.16's password:
[~] # mdadm -E /dev/sda3
/dev/sda3:
Magic : a92b4efc
Version : 00.90.00
UUID : 2d2ee77d:045a6e0f:438d81dd:575c1ff3
Creation Time : Wed Jun 6 20:11:14 2012
Raid Level : raid5
Used Dev Size : 1951945600 (1861.52 GiB 1998.79 GB)
Array Size : 5855836800 (5584.56 GiB 5996.38 GB)
Raid Devices : 4
Total Devices : 4
Preferred Minor : 0
Update Time : Fri Jan 11 10:24:40 2013
State : clean
Active Devices : 4
Working Devices : 4
Failed Devices : 0
Spare Devices : 0
Checksum : 8b330731 – correct
Events : 0.4065365
Layout : left-symmetric
Chunk Size : 64K
Number Major Minor RaidDevice State
this 0 8 3 0 active sync /dev/sda3
0 0 8 3 0 active sync /dev/sda3
1 1 8 19 1 active sync /dev/sdb3
2 2 8 35 2 active sync /dev/sdc3
3 3 8 51 3 active sync /dev/sdd3
[~] # mdadm -E /dev/sdb3
mdadm: No md superblock detected on /dev/sdb3.
[~] # mdadm -CfR --assume-clean /dev/md0 -l 5 -n 4 /dev/sda3 /dev/sdb3 /dev/sdc3 /dev/sdd3
mdadm: /dev/sda3 appears to contain an ext2fs file system
size=1560869504K mtime=Fri Jan 11 10:22:54 2013
mdadm: /dev/sda3 appears to be part of a raid array:
level=raid5 devices=4 ctime=Wed Jun 6 20:11:14 2012
mdadm: /dev/sdd3 appears to contain an ext2fs file system
size=1292434048K mtime=Fri Jan 11 10:22:54 2013
mdadm: array /dev/md0 started.
[~] # more /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath]
md0 : active raid5 sdd3[3] sdc3[2] sdb3[1] sda3[0]
5855836800 blocks level 5, 64k chunk, algorithm 2 [4/4] [UUUU]
md4 : active raid1 sda2[2](S) sdd2[0] sdc2[3](S) sdb2[1]
530048 blocks [2/2] [UU]
md13 : active raid1 sda4[0] sdc4[3] sdd4[2] sdb4[1]
458880 blocks [4/4] [UUUU]
bitmap: 0/57 pages [0KB], 4KB chunk
md9 : active raid1 sda1[0] sdc1[3] sdd1[2] sdb1[1]
530048 blocks [4/4] [UUUU]
bitmap: 1/65 pages [4KB], 4KB chunk
unused devices:
[~] # mount /dev/md0 /share/MD0_DATA -t ext4
[~] #
Here is the result:
..
IX – If None of These Guides Works;
Rarely, I cannot solve these kinds of problems on the Qnap itself, so sometimes I plug the HDDs into another Qnap to solve them.
Use a different Qnap with different hardware to solve these kinds of cases!
When you plug in the HDDs and restart the device, the Qnap will ask you to migrate or reinstall; choose the “Migrate” option.
In this case, the Qnap couldn't boot on the TS-659 Pro, the Putty commands didn't work and we couldn't even enter the Qnap admin interface.
The HDDs couldn't boot on the TS-659 Pro, but after installing them in a different device (in this case I plugged them into a TS-509 Pro) the device boots fine. Of course you must use a Qnap that supports enough HDDs (6 or more in this case) to save your data.
So if everything fails, just try this process on another Qnap.
Obtained from this link
Stuck at Booting When HDDs Are Not Plugged In
If you cannot access the NAS after Step 3, please do the following:
- Turn off the NAS.
- Take out all the hard disk drives.
- Restart the NAS.
You will hear a beep after pressing the power button, followed by 2 beeps 2 minutes later. If you cannot hear the first beep, please contact your local reseller or distributor for repair or replacement service.
If you cannot hear the two beeps and Qnapfinder cannot find your NAS, the NAS firmware is damaged. To fix this problem, please follow the “Qnap Firmware Recovery / Reflash” documents for your device model.
If you cannot solve the problem by yourself, please contact your local reseller or distributor for repair or replacement service.
If Qnapfinder can find the Qnap, follow these steps:
1 – Download Putty software;
http://www.chiark.greenend.org.uk/~sgtatham/putty/download.html
2 – Plug in all of your HDDs in the right order while the device is still running. Do not restart the Qnap yet. Check whether all HDDs are all right and recognized by the Qnap. If any HDD is not recognized or its size shows as “0”, plug that HDD out.
3 – Log in with Putty by entering the Qnap IP / username / password. (Username / password: admin / admin. The port must be 22.)
Now enter the commands below. (Select a command on this page and copy it, then go to Putty and press the right mouse button once; this pastes the command automatically.)
# config_util 1 -> (It must say “mirror of root succeed”. If it gives a “mirror of root failed” error, stop at this step and request help from Qnapsupport.)
# storage_boot_init 1
# df
If /dev/md9 (HDA_ROOT) appears full, please contact the QNAP support team.
# reboot
Now the Qnap should reboot normally. If you can reach the Qnap interface after the restart, check the RAID system and replace the broken HDD with a new one.
Obtained from this link
QNAP TS-212 How to rebuild RAID manually from telnet
Scenario = replace a disk in QNAP TS-212 with RAID 1 configuration active
The RAID rebuild should start automatically, but sometimes you can get stuck with 1 Single Disk + 1 Mirroring Disk Volume:
According to QNAP Support (“How can I migrate from Single Disk to RAID 0/1 in TS-210/TS-212?”), the TS-210/TS-212 does not support Online RAID Level Migration. Therefore, you are expected to back up the data on the single disk to another location, install the second hard drive, and then recreate the new RAID 0/1 array (the hard drive must be formatted).
The workaround is this:
- Telnet to NAS as Admin
- Check your current disk configuration for Disk #1 and Disk #2 =
fdisk -l /dev/sda
Disk /dev/sda: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sda1 1 66 530125 83 Linux
/dev/sda2 67 132 530142 83 Linux
/dev/sda3 133 121538 975193693 83 Linux
/dev/sda4 121539 121600 498012 83 Linux
fdisk -l /dev/sdb
Disk /dev/sdb: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sdb1 1 66 530125 83 Linux
/dev/sdb2 67 132 530142 83 Linux
/dev/sdb3 133 121538 975193693 83 Linux
/dev/sdb4 121539 121600 498012 83 Linux
- SDA is the first disk, SDB is the second disk
- Verify the current status of RAID with this command =
mdadm --detail /dev/md0
/dev/md0:
Version : 00.90.03
Creation Time : Thu Sep 22 21:50:34 2011
Raid Level : raid1
Array Size : 486817600 (464.27 GiB 498.50 GB)
Used Dev Size : 486817600 (464.27 GiB 498.50 GB)
Raid Devices : 2
Total Devices : 1
Preferred Minor : 0
Persistence : Superblock is persistent
Intent Bitmap : Internal
Update Time : Thu Jul 19 01:13:58 2012
State : active, degraded
Active Devices : 1
Working Devices : 1
Failed Devices : 0
Spare Devices : 0
UUID : 72cc06ac:570e3bf8:427adef1:e13f1b03
Events : 0.1879365
Number Major Minor RaidDevice State
0 0 0 0 removed
1 8 3 1 active sync /dev/sda3
- As you can see the /dev/sda3 is working, so disk #1 is OK, but disk #2 is missing from RAID
- Check if Disk #2 /dev/sdb is mounted (it should be) =
mount
/proc on /proc type proc (rw)
none on /dev/pts type devpts (rw,gid=5,mode=620)
sysfs on /sys type sysfs (rw)
tmpfs on /tmp type tmpfs (rw,size=32M)
none on /proc/bus/usb type usbfs (rw)
/dev/sda4 on /mnt/ext type ext3 (rw)
/dev/md9 on /mnt/HDA_ROOT type ext3 (rw)
/dev/md0 on /share/MD0_DATA type ext4 (rw,usrjquota=aquota.user,jqfmt=vfsv0,user_xattr,data=ordered,delalloc,noacl)
tmpfs on /var/syslog_maildir type tmpfs (rw,size=8M)
/dev/sdt1 on /share/external/sdt1 type ufsd (rw,iocharset=utf8,dmask=0000,fmask=0111,force)
tmpfs on /.eaccelerator.tmp type tmpfs (rw,size=32M)
/dev/sdb3 on /share/HDB_DATA type ext3 (rw,usrjquota=aquota.user,jqfmt=vfsv0,user_xattr,data=ordered,noacl)
- Dismount the /dev/sdb3 Disk #2 with this command =
umount /dev/sdb3
- Add Disk #2 into the RAID /dev/md0 =
mdadm /dev/md0 --add /dev/sdb3
mdadm: added /dev/sdb3
- Check the RAID status and the rebuild should be started automatically =
mdadm --detail /dev/md0
/dev/md0:
Version : 00.90.03
Creation Time : Thu Sep 22 21:50:34 2011
Raid Level : raid1
Array Size : 486817600 (464.27 GiB 498.50 GB)
Used Dev Size : 486817600 (464.27 GiB 498.50 GB)
Raid Devices : 2
Total Devices : 2
Preferred Minor : 0
Persistence : Superblock is persistent
Intent Bitmap : Internal
Update Time : Thu Jul 19 01:30:27 2012
State : active, degraded, recovering
Active Devices : 1
Working Devices : 2
Failed Devices : 0
Spare Devices : 1
Rebuild Status : 0% complete
UUID : 72cc06ac:570e3bf8:427adef1:e13f1b03
Events : 0.1879848
Number Major Minor RaidDevice State
2 8 19 0 spare rebuilding /dev/sdb3
1 8 3 1 active sync /dev/sda3
- Check the NAS site for the rebuild % progress
- After the RAID rebuild completes, restart the NAS to clean up the previous mount point folder for sdb3