This is the second time I have set up a Compellent, and this time I figured I would go all the way down the rabbit hole and delve into switch optimization for iSCSI traffic. My last project was with Cisco switches; this time it's with Dell, a pair of PowerConnect 6224 switches to be precise. The main reason being that Dell purchased Compellent not long ago, so, like products, yadda yadda. Alas, I digress.
In previous talks with Compellent, I was given the '10 commandments' of switch configuration by at least two different representatives after having iSCSI issues. These commandments roughly consisted of:
• Gigabit Full Duplex connectivity between Storage Center and all local Servers
• Auto-Negotiate for all switches that will correctly negotiate at Gigabit Full Duplex
• Gigabit Full Duplex hard set for all iSCSI ports, for both Storage Center and Servers for switches that do not correctly negotiate
• Bi-Directional Flow Control enabled for all Switch Ports that servers or controllers are using for iSCSI traffic.
• Bi-Directional Flow Control enabled for all Server Ports used for iSCSI (Storage Center and QLogic HBA’s automatically enable it).
• Bi-Directional Flow Control enabled for all ports that handle iSCSI traffic. This includes all devices in between two sites that are used for replication.
• Separate VLAN for iSCSI.
• Two separate networks or VLANs for multipathed iSCSI.
• Two separate IP subnets for the separate networks or VLANs in multipathed iSCSI.
• Unicast storm control disabled on every switch that handles iSCSI traffic.
• Multicast disabled at the switch level for any iSCSI VLANs.
o Multicast storm control enabled (if available) when multicast cannot be disabled.
• Broadcast disabled at the switch level for any iSCSI VLANs.
o Broadcast storm control enabled (if available) when broadcast cannot be disabled.
• Routing disabled between regular network and iSCSI VLANs.
• Do not use Spanning Tree (STP or RSTP) on ports which connect directly to end nodes (the server or Compellent controller iSCSI ports). If you must use it, enable the Cisco PortFast option on these ports so that they are configured as edge ports.
• Ensure that any switches used for iSCSI are of a non-blocking design.
• When deciding which switches to use, remember that you are running SCSI traffic over them. Be sure to use quality, managed, enterprise-class networking equipment. It is not recommended to use SBHO (small business/home office) class equipment outside of lab/test environments.
• Verify the optimal MTU for replications. The default is 1500, but sometimes WAN circuits or VPNs can create additional overhead, which can cause packet fragmentation. This fragmentation can cause suboptimal performance. The MTU is adjustable via the GUI in 5.x Storage Center firmware.
For Jumbo Frame Support
• Some switches have limited buffer sizes and can only support Flow Control or Jumbo Frames, but not both at the same time. Compellent strongly recommends choosing Flow Control.
• All devices connected through iSCSI need to support 9k jumbo frames.
• All devices used to connect iSCSI devices need to support it.
o This means every switch, router, WAN Accelerator and any other network device that will handle iSCSI traffic needs to support 9k Jumbo Frames.
• If the customer is not 100% positive that every device in their iSCSI network supports 9k Jumbo Frames, then they should NOT turn on Jumbo Frames (a quick end-to-end check is shown after this list).
• QLogic 4010 series cards (Early Compellent iSCSI Cards) do not support Jumbo Frames.
o In the Storage Center GUI default screen, expand the tree in the following order: Controllers -> SN# (for the controller) -> IO Cards -> iSCSI -> highlight the port, and the General tab should list the model number in the description.
• Because devices on both sides (server and SAN) need Jumbo Frames enabled, the change to enable or disable Jumbo Frames is recommended during a maintenance window. If servers have it enabled first, the Storage Center will not understand their packets. If Storage Center enables it first, servers will not understand its packets.
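By the way, a quick way to sanity-check jumbo frame support end to end is a do-not-fragment ping sized just under 9000 bytes: if any device in the path cannot pass it, the ping fails. Something like the following, where 10.10.10.20 stands in for your own iSCSI peer address (8972 bytes of payload plus 28 bytes of IP/ICMP headers works out to a 9000-byte packet, i.e. the jumbo MTU):
From a Windows server: ping -f -l 8972 10.10.10.20
From a Linux host: ping -M do -s 8972 10.10.10.20
From an ESXi host: vmkping -d -s 8972 10.10.10.20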
Okay, so it was more than 10. Anyway, all that roughly distills down to the following needs:
1) Set spanning-tree mode to RSTP and enable portfast on all ports, or disable spanning tree altogether.
2) Enable jumbo frame support on all iSCSI ports
3) Disable unicast storm-control on all iSCSI ports
4) Enable multicast storm-control on all iSCSI ports
5) Enable broadcast storm-control on all iSCSI ports
6) Enable flow control
For each switch, the commands are different… but for the Dell PowerConnect 6224 in particular, these needs translated into the following commands (in order, considering I used ports 1-12 for iSCSI):
spanning-tree mode rstp
no storm-control unicast
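For reference, a fuller configuration covering all six needs would look roughly like the sketch below, assuming ports 1/g1 through 1/g12 carry the iSCSI traffic. Treat it as a starting point rather than definitive syntax: interface naming and the exact flow-control and storm-control commands can differ between firmware revisions, so check each line against the 6224 CLI reference (lines starting with ! are annotations, not commands).
configure
! global settings: RSTP and 802.3x flow control
spanning-tree mode rstp
flowcontrol
! per-port settings for the iSCSI ports
interface range ethernet 1/g1-1/g12
spanning-tree portfast
mtu 9216
no storm-control unicast
storm-control multicast
storm-control broadcast
exit
exit
copy running-config startup-config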
That was it, switch optimized, on we go!
KB ID: 1007
Product: Veeam Management Pack for VMware (SCOM); Veeam Smart Plug-in (SPI) for VMware; Veeam Monitor; Veeam ONE
Version: All
Published: 2011-07-14
Last Modified: 2013-01-15
One of the Veeam monitoring products (nworks SPI, MP, Monitor, etc.) reports that a host's hardware status has changed and that its sensor states "Sensor VMware Rollup Health State equal Unknown", "Sensor VMware Rollup Health State equal Red", or "Sensor VMware Rollup Health State equal Yellow".
These alerts are useful when hosts in your environment have hardware issues: the alert identifies the issue and its severity on VMware's color scale (Yellow – something is wrong but does not involve data loss; Red – potential data loss or production down; Unknown – the current status of the sensor is not known).
The problem arises when you resolve the issue on the host and the host returns to "Normal operating conditions" or "Green", but the Veeam products continue to report the original problem. Other issues can include the examples listed below as well.
You may experience the following problem with nworks products:
“Sensor VMware Rollup Health State equal Unknown” messages are displayed on a standalone host within a cluster, and there are no alerts on this matter.
In Veeam Monitor, you may notice that hardware status warnings are visible using vCenter, but there are no hardware related alarms visible in Veeam Monitor.
Veeam monitoring products pull the hardware status of the monitored object using a MOB connection (MOB, Managed Object Browser). The VMware vSphere client, on the other hand, uses a different method to obtain this type of data. Because of this difference, you may see different information in the VMware vSphere client and in Veeam monitoring products.
NOTE: Current and correct hardware status of monitored objects is always available via VMware MOB. The hardware (host in this case) always takes precedence over the VC for the most accurate information.
In order to narrow down the issue, we should compare the hardware status information for monitored objects using both VMware vCenter’s MOB and Host’s MOB.
How to check the hardware sensors using VMware MOB:
1. Open the VMware vCenter server’s MOB web link using your Internet browser (https://[your_vCenter_server_address]/mob) and follow this path:
content -> rootFolder -> childEntity -> hostFolder -> childEntity -> host [select appropriate host] -> runtime -> healthSystemRuntime -> systemHealthInfo -> numericSensorInfo
2. Find HostNumericSensorInfo related to VMware Rollup Health State. Make sure that the summary string is “Sensor is operating under normal conditions” and the label string is “Green”.
As you can see from the screenshot, this host is having a problem according to the information provided in the vCenter server's MOB (VMware Rollup Health State is Red). What we were expecting to see is the "Green" status with the sensor operating under normal conditions.
3. Then open the VMware HOST’s MOB web link using your Internet browser (https://[your_VMware_host_address]/mob) and follow this path:
content -> rootFolder -> childEntity -> hostFolder -> childEntity -> host -> runtime -> healthSystemRuntime -> systemHealthInfo -> numericSensorInfo
4. Find HostNumericSensorInfo related to the VMware Rollup Health State. Make sure that the summary string is “Sensor is operating under normal conditions” and the label string is “Green”.
As you can see from the screenshot, this host is NOT having a problem according to the information provided in the host's MOB (VMware Rollup Health State is Green).
5. Please make sure that vCenter’s and Host’s MOBs show you the same status/summary for the VMware Rollup Health State.
If you see any difference between the VMware vSphere client and/or VMware MOB statuses (as in the example above), please open a support case with VMware’s support team.
Please note that for Memory and Storage, hardware sensors will pull the data from additional sections of MOB.
Here are the paths for Memory:
vCenter MOB: content -> rootFolder -> childEntity -> hostFolder -> childEntity -> host [select appropriate host] -> runtime -> healthSystemRuntime -> hardwareStatusInfo -> memoryStatusInfo
Then open the VMware host's MOB web link using your Internet browser (https://[your_VMware_host_address]/mob) and follow this path:
content -> rootFolder -> childEntity -> hostFolder -> childEntity -> host -> runtime -> healthSystemRuntime -> hardwareStatusInfo -> memoryStatusInfo
Here are the paths for Storage:
vCenter MOB: content -> rootFolder -> childEntity -> hostFolder -> childEntity -> host [select appropriate host] -> runtime -> healthSystemRuntime -> hardwareStatusInfo -> storageStatusInfo
Then open the VMware host's MOB web link using your Internet browser (https://[your_VMware_host_address]/mob) and follow this path:
content -> rootFolder -> childEntity -> hostFolder -> childEntity -> host -> runtime -> healthSystemRuntime -> hardwareStatusInfo -> storageStatusInfo
If you see differences between vCenter’s and HOST’s MOB, it’s strongly recommended that you open a support case with VMware Support team in order to get the issue resolved.
NOTE: Also, Veeam has found that simply putting the host into maintenance mode and then exiting maintenance mode can address the problem. We still suggest that you open a support case with VMware Support team on this matter.
NOTE: For additional troubleshooting, you can follow the steps below to resolve this issue, but do so at your own risk. If anything fails, or if this does not resolve the issue, you will still need to contact VMware support.
On the VC, do the following to resolve this conflict at your own risk.
1. Disable EVC on the cluster.
2. vMotion the machines over to a secondary node.
3. Put the "faulted" node into maintenance mode and evict it from the cluster.
4. Remove the "faulted" node from vCenter.
5. Log in to the "faulted" node via ILOM and restart the management agents (see the example after these steps).
6. Re-add the node to vCenter.
7. Re-add the node to the cluster.
8. Re-enable EVC.
Please note that if you do not use some of these features (EVC, clustering, etc.), you can skip the corresponding steps. The main idea is to remove all VMs from the host, remove the host from the cluster/VC, restart the host (or the management agents), then add the host back into the VC/cluster. This process must be done one host at a time to resolve the issue.
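For step 5, if you have SSH or console access to the ESXi host, a typical way to restart the management agents from the host shell is sketched below (this assumes ESXi 4.x/5.x; the same thing can be done from the DCUI via "Restart Management Agents"):
# restart the host agent (hostd) and the vCenter agent (vpxa)
/etc/init.d/hostd restart
/etc/init.d/vpxa restart
# or restart all management services in one go
services.sh restart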
For additional information regarding hardware monitoring, check out the “vSphere Client Hardware Health Monitoring” whitepaper from VMware (4.1). http://www.vmware.com/files/pdf/techpaper/hwhm41_technote.pdf
This document shows you how to configure iSCSI Advanced ACL (access control list) on QNAP Turbo NAS and verify the settings. All x86-based Turbo NAS models (TS-x39, TS-x59, TS-509, and TS-809) support this feature. Refer to the comparison table: http://www.qnap.com/images/products/comparison/Comparison_NAS.html
In a clustered network environment, multiple iSCSI initiators can be allowed to access the same iSCSI LUN (Logical Unit Number) by a cluster-aware file system or a SCSI fencing mechanism. The cluster-aware mechanism provides file locking to avoid file system corruption.
If you do not use the iSCSI service in a clustered environment and an iSCSI target is connected by more than one initiator, you need to prevent multiple accesses to an iSCSI LUN at the same time. QNAP iSCSI Advanced ACL (Access Control List) offers you a safe way to set up your iSCSI environment. You can create a LUN masking policy to configure the permissions of the iSCSI initiators which attempt to access the LUNs mapped to the iSCSI targets on the NAS.
LUN Masking is used to define the LUN access rights for a connected iSCSI initiator. If an initiator is not assigned to any LUN Masking policy, the default policy will be applied (see Figure 1). You can set up the following LUN access rights for each connected initiator:
- Read-only: The connected initiator can only read the data from the LUNs.
- Read/Write: The connected initiator has Read and Write permission to the LUNs.
- Deny Access: The LUN is invisible to the connected initiator.
This how-to demonstrates how to configure advanced ACL on the QNAP Turbo NAS. The test environment is shown in Table 1. Host 1 and Host 2 connect to the same iSCSI target, which has 3 LUNs. The file system format of the LUNs is NTFS. The default policy is to deny access from all initiators. The LUN permissions for the two initiators are listed in Table 2.
Note: If some iSCSI initiators are connected to the iSCSI targets while you are modifying the ACL settings, the modifications will take effect only after those connected initiators disconnect and reconnect to the iSCSI targets.
Figure 1: Flowchart of Advanced ACL
| Device | Details |
| Host 1 | OS: Windows 2008; Initiator IQN: iqn.1991-05.com.microsoft:host1 |
| Host 2 | OS: Windows 2008; Initiator IQN: iqn.1991-05.com.microsoft:host2 |
| QNAP NAS | iSCSI target IQN: iqn.2004-04.com.qnap:ts-439proii:iscsi.test.be23e6; LUN 1 name: lun-1, size: 10GB; LUN 2 name: lun-2, size: 20GB; LUN 3 name: lun-3, size: 30GB |
Table 1: Test Environment
| | Host 1 | Host 2 |
| LUN 1 | Deny | Read Only |
| LUN 2 | Read Only | Read/Write |
| LUN 3 | Read/Write | Deny |
Table 2: LUN Masking Settings
iSCSI configuration on QNAP NAS
Default Policy Settings
Log in to the web administration interface of the NAS as an administrator. Go to "Disk Management" > "iSCSI" > "ADVANCED ACL". Click to edit the default policy.
Figure 2: Default Policy
Select "Deny Access" to deny access to all LUNs. Click "APPLY".
Figure 3: Default Policy Configuration
Configure LUN masking for Host 1:
- Click “Add a Policy”.
- Enter “host1-policy” in the “Policy Name”.
- Enter “iqn.1991-05.com.microsoft:host1” in the “Initiator IQN”.
- Set the LUN permission according to Table 2: LUN Masking Settings.
- Click “APPLY”.
Repeat the above steps to configure the LUN permission for Host 2.
Figure 4: Add a New Policy
Figure 5: Configure New Policy for Host 1
Figure 6: Configure New Policy for Host 2
Hint: How do I find the initiator IQN?
On Host 1 and Host 2, start the Microsoft iSCSI Initiator and click "General". You can find the IQN of the initiator as shown below.
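If you prefer the command line, the built-in iscsicli tool will also show it. On Windows Server 2008, running it with no arguments prints a banner that includes the local initiator node name (for example iqn.1991-05.com.microsoft:host1); press Ctrl+C to leave its interactive prompt. This is only a convenience, the "General" tab shows the same value:
C:\> iscsicli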
Verify the settings
To verify the configuration, we can connect to this iSCSI target on Host 1 and Host 2.
Verification on Host 1:
- Connect to the iSCSI service. (Refer to "Connect to iSCSI targets by Microsoft iSCSI initiator on Windows" for details.)
- On the Start menu in Windows, right click "Computer" > "Manage". In the "Server Manager" window, click "Disk Management".
Host 1 has no access permission to LUN-1 (10 GB). Therefore, only two disks are listed. Disk 1 (20 GB) is read only and Disk 2 (30 GB) is writable.
Verification on Host 2:
Repeat the same steps when verifying Host 2. Two disks are listed in “Server Manager”. Disk 1 (10 GB) is read only and Disk 2 (20 GB) is writable.
This document provides basic guidelines to show you how to configure the QNAP TS-x79 series Turbo NAS as the iSCSI datastore for VMware ESXi 5.0. The TS-x79 series Turbo NAS offers class-leading system architecture matched with 10 GbE networking performance designed to meet the needs of demanding server virtualization. With the built-in iSCSI feature, the Turbo NAS is an ideal storage solution for the VMware virtualization environment.
The following recommendations (illustrated in Figure 1) are provided for you to utilize the QNAP Turbo NAS as an iSCSI datastore in a virtualized environment.
- Create each iSCSI target with only one LUN
Each iSCSI target on the QNAP Turbo NAS is created with two service threads to handle protocol PDU sending and receiving. If a target hosts multiple LUNs, the I/O requests for these LUNs will be served by the same thread set, which can create a data transfer bottleneck. Therefore, it is recommended to assign only one LUN to each iSCSI target.
- Use “instant allocation” to create iSCSI LUN
Choose "instant allocation" when creating an iSCSI LUN for higher read/write performance in an I/O-intensive environment. Note that creating the LUN will take longer than creating one with "thin provisioning".
- Store the operating system (OS) and the data of a VM on different LUNs
Have a VM datastore, reached through a dedicated vmnic (virtual network interface card), to store the VM, and map another LUN to the VM to store its data. Use another vmnic to connect to the data LUN.
- Use multiple targets with LUNs to create an extended datastore to store the VMs
When a LUN is connected to an ESXi host, it is represented by a single iSCSI queue on the NAS. When the LUN is shared among multiple virtual machine disks, all I/O has to serialize through that iSCSI queue and only one virtual disk's traffic can traverse the queue at any point in time, leaving all other virtual disks' traffic waiting in line. The LUN and its respective iSCSI queue may become congested and the performance of the VMs may decrease. Therefore, you can create multiple targets with LUNs and combine them into an extended datastore, which allows more iSCSI queues to handle VM access. In this practice, we will use four LUNs as an extended datastore in VMware.
- For a normal datastore, limit the number of VMs per datastore to 10
If you want to use a single LUN as a datastore, it is recommended to run no more than 10 virtual machines per datastore. The actual number of VMs allowed may vary depending on the environment.
Be careful with a datastore shared by multiple ESX hosts. VMFS is a clustered file system and uses SCSI reservations as part of its distributed locking algorithms. Administrative operations, such as creating or deleting a virtual disk, extending a VMFS volume, or creating or deleting snapshots, result in metadata updates to the file system using locks, and thus result in SCSI reservations. A reservation causes the LUN to be available exclusively to a single ESX host for a brief period of time, which impacts VM performance.
The following items are required to deploy the Turbo NAS with VMware ESXi 5.0:
- One ESXi 5.0 host
- Three NIC ports on ESXi host
- Two Ethernet switches
- QNAP Turbo NAS TS-EC1279U-RP
Network configuration of ESXi host:
| vmnic | IP address/subnet mask | Remark |
| vmnic 0 | 10.8.12.28/23 | Console management (not necessary) |
| vmnic 1 | 10.8.12.85/23 | A dedicated interface for VM datastore |
| vmnic 2 | 18.104.22.168/16 | A dedicated interface for VM data LUN |
Network configuration of TS-EC1279U-RP Turbo NAS:
| Network Interface | IP address/subnet mask | Remark |
| Ethernet 1 | 10.8.12.125/23 | A dedicated interface for VM datastore |
| Ethernet 2 | 22.214.171.124/16 | A dedicated interface for VM data LUN |
| Ethernet 3 | - | Not used in this demonstration |
| Ethernet 4 | - | Not used in this demonstration |
iSCSI configuration of TS-EC1279U-RP Turbo NAS:
| iSCSI Target | iSCSI LUN | Remark |
| DataTarget | DataLUN | To store VM data |
| VMTarget1 | VMLUN1 | For the extended VM datastore |
| VMTarget2 | VMLUN2 | For the extended VM datastore |
| VMTarget3 | VMLUN3 | For the extended VM datastore |
| VMTarget4 | VMLUN4 | For the extended VM datastore |
Ethernet switch connections:
| Switch | Port | Remark |
| A | 0 | To connect to Ethernet 1 of the Turbo NAS |
| A | 1 | To connect to vmnic 1 of the ESXi server |
| B | 0 | To connect to Ethernet 2 of the Turbo NAS |
| B | 1 | To connect to vmnic 2 of the ESXi server |
Note: The iSCSI adapters should be on a private network.
Configure the network settings of the Turbo NAS
Log in to the web administration page of the Turbo NAS. Go to "System Administration" > "Network" > "TCP/IP". Configure standalone network settings for Ethernet 1 and Ethernet 2.
- Ethernet 1 IP: 10.8.12.125
- Ethernet 2 IP: 126.96.36.199
Note: Enable "Balance-alb" bonding mode or 802.3ad aggregation mode (an 802.3ad-compliant switch is required) to allow inbound and outbound traffic link aggregation.
Create iSCSI targets with LUNs for the VM and its data on the NAS
Log in to the web administration page of the Turbo NAS. Go to "Disk Management" > "iSCSI" > "Target Management" and create five iSCSI targets, each with an "instant allocation" LUN (see Figure 6). VMLUNs 1-4 will be merged into an extended datastore to store your VMs. DataLUN, 200 GB with instant allocation, will be used as the data storage for the VM.
For details on creating iSCSI targets and LUNs on the Turbo NAS, please see the application note "Create and use the iSCSI target service on the QNAP NAS" at http://www.qnap.com/en/index.php?lang=en&sn=5319. Once the iSCSI targets and LUNs have been created on the Turbo NAS, use the VMware vSphere Client to log in to the ESXi server.
Configure the network settings of the ESXi server
Run the VMware vSphere Client and select the host. Under "Configuration" > "Hardware" > "Networking", click "Add Networking" to add a vSwitch with a VMkernel port (VMPath) for the VM datastore connection. The VM will use this iSCSI port to communicate with the NAS. The IP address of this iSCSI port is 10.8.12.85. Then add another vSwitch with a VMkernel port (DataPath) for the data connection of the VM. The IP address of this iSCSI port is 188.8.131.52.
Enable iSCSI software adapter in ESXi
Select the host. Under “Configuration” > “Hardware” > “Storage Adapters”, select “iSCSI Software Adapter”. Then click “Properties” in the “Details” panel.
Click “Configure” to enable the iSCSI software adapter.
Bind the iSCSI ports to the iSCSI adapter
Select the host. Under "Configuration" > "Hardware" > "Storage Adapters", select "iSCSI Software Adapter". Then click "Properties" in the "Details" panel. Go to the "Network Configuration" tab and then click "Add" to add the VMkernel ports: VMPath and DataPath.
Connect to the iSCSI targets
Select the host. Under "Configuration" > "Hardware" > "Storage Adapters", select "iSCSI Software Adapter". Then click "Properties" in the "Details" panel. Go to the "Dynamic Discovery" tab and then click "Add" to add one of your NAS IP addresses (10.8.12.125 or 184.108.40.206). Then click "Close" to rescan the software iSCSI bus adapter.
After rescanning the software iSCSI bus adapter, you can see the connected LUN in the “Details” panel.
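The same sequence (enable the software adapter, bind the VMkernel ports, add the discovery address, rescan) can also be done from an SSH session on the ESXi 5.0 host with esxcli. In the sketch below, vmhba33, vmk1 and vmk2 are assumptions; check your own names first with "esxcli iscsi adapter list" and "esxcfg-vmknic -l":
# enable the software iSCSI adapter
esxcli iscsi software set --enabled=true
# bind the iSCSI VMkernel ports to the software adapter
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk1
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk2
# add the NAS as a dynamic discovery (send targets) address
esxcli iscsi adapter discovery sendtarget add --adapter=vmhba33 --address=10.8.12.125:3260
# rescan the adapter so the new LUNs show up
esxcli storage core adapter rescan --adapter=vmhba33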
Select the preferred path for each LUN
Right click each of the VMLUNs (1-4) and click "Manage Paths…" to specify their paths.
Change the path selection policy to "Fixed (VMware)".
Then select the dedicated path (10.8.12.125) and click "Preferred" to set the path for the VM connection.
Repeat the above steps to set up the preferred path (220.127.116.11) for DataLUN (200 GB).
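The path policy can also be set with esxcli if you prefer. In the sketch below, the naa identifier and the path name are placeholders; list the real ones first with "esxcli storage nmp device list" and "esxcli storage nmp path list":
# set the Fixed path selection policy on the device backing a VMLUN
esxcli storage nmp device set --device naa.xxxxxxxxxxxxxxxx --psp VMW_PSP_FIXED
# mark the path that goes through 10.8.12.125 as preferred
esxcli storage nmp psp fixed deviceconfig set --device naa.xxxxxxxxxxxxxxxx --path vmhba33:C0:T0:L0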
Create and enable a datastore
Once the iSCSI targets have been connected, you can add your datastore on a LUN. Select the host. Under "Configuration" > "Hardware" > "Storage", click "Add Storage…" and select one of the VMLUNs (1 TB) to create the new datastore (VMdatastore). After a few seconds, you will see the datastore in the ESXi server.
Merge other VMLUNs to the datastore
Select the host. Under “Configuration” > “Hardware” > “Storage”, right click VMdatastore and click “Properties”.
Select the other three VMLUNs and merge them into the datastore.
After all the VMLUNs are merged, the size of the VMdatastore will become 4 TB.
Create your VM and store it in the VM datastore
Right click the host to create a new virtual machine and select VMdatastore as its destination storage. Click "Next" and follow the wizard to create the VM.
Attach the DataLUN to the VM
Right click the VM that you just created and click "Edit Settings…" to add a new disk. Then click "Next".
Select “Raw Device Mappings” (RDM) and click “Next”.
Select DataLUN (200 GB) and click “Next”. Follow the wizard to add a new hard disk.
After a few seconds, a new hard disk will be added on your VM.
Your VM is ready to use
All the VM settings are now complete. Start the VM and install your applications, or save the data to be accessed on the RDM disk. If you wish to create another VM, please save the second VM to VMdatastore and create a new LUN on the Turbo NAS for its data access.
IX – If None of These Guides Works;
I – Introduction;
Warning: This document is recommended for professional users only. If you don't know what you're doing, you may damage your RAID and lose data. QNAP Support Taiwan is very good at resolving these kinds of RAID corruptions easily, and my advice is to contact them directly in such cases.
1 – If the document says "plug out the HDD and plug it in", always use the original RAID HDD in the same slot. Don't use new HDDs!
2 – If you can access your data after these procedures, back it up quickly.
3 – You can lose data on hardware RAID devices and other NAS brands, and while you can hardly lose data on a QNAP, recovering your data can cost time, so a double backup is always recommended for future cases.
4 – If one of your HDDs causes the QNAP to restart itself when you plug it into the NAS, don't use that HDD while trying to resolve these kinds of cases.
This document is not valid for the QNAP TS-109 / TS-209 / TS-409 and their sub-models.
In this case, the RAID information reads "RAID 5 Drive 1 3", so your 2nd HDD is out of the RAID. Just plug the broken HDD out of the 2nd HDD slot, wait around 15 seconds, and plug in a new HDD of the same size.
III – How to Fix "In Degraded Mode, Read Only, Failed Drive(s) X" Cases;
2 – If you plug the HDD out -> then plug it into the same slot again -> press "Recover" once again, it should repair the RAID;
3 – Here are the log files;
4 – The RAID comes back in "Degraded Mode", so just quickly back up your data and reinstall the QNAP without the broken HDD.
We lost 2 HDDs from the RAID 5, so it is impossible to fix this RAID again. Just re-install.
V – "Recover" Doesn't Work: How to Fix if the RAID Becomes "Unmounted" or "Not Active"
1 – Download PuTTY and log in to the QNAP,
For a RAID 5 with 4 HDDs:
mdadm -CfR --assume-clean /dev/md0 -l 5 -n 4 /dev/sda3 /dev/sdb3 /dev/sdc3 /dev/sdd3
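For a different number of member disks, adjust the -n count and the device list to match. Once the array assembles, a rough sketch for checking it and mounting the data volume read-only to copy your data off is shown below; it assumes the usual QNAP layout, with the data volume on /dev/md0 formatted as ext3/ext4 and mounted at /share/MD0_DATA:
# confirm the array is assembled and all members are present
cat /proc/mdstat
mdadm --detail /dev/md0
# check the file system without making any changes
e2fsck -n /dev/md0
# mount the data volume read-only and back up your data (use -t ext3 on older firmware)
mount -t ext4 -o ro /dev/md0 /share/MD0_DATA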
VI – Stuck at "Booting" / "Starting Services, Please Wait" / Can't Even Log In to the QNAP Interface
Just plug out the broken HDD and restart the QNAP. This should bring the QNAP back up again.
If you're not sure which HDD is broken, please follow these steps;
1 – Power off the NAS.
2 – Plug out all HDDs.
3 – Start the QNAP without HDDs.
4 – QNAP Finder should find the QNAP in a few minutes. Now plug all the HDDs back into the same slots.
VII – If the "config_util 1" Command Gives a "Mirror of Root Failed" Error;
1 – Power off the NAS.
2 – Plug out all HDDs.
3 – Start the QNAP without HDDs.
4 – QNAP Finder should find the QNAP in a few minutes. Now plug all the HDDs back into the same slots.
5 – Download PuTTY from this link and log in with admin / admin as the username / password;
6 – Type the command lines marked in blue;
7 – Download WinSCP and log in to the QNAP. Go to -> share -> MD0_DATA folder, and back up your data quickly.
VIII – How to Fix “Broken RAID” Scenario
“No server downtime when you need to replace the RAID drives”
- Logical volume status when the RAID operates normally
- When a drive fails, follow the steps below to check the drive status:
- Install a new drive to rebuild RAID 5 by hot swapping
Procedure of hot-swapping the hard drives when the RAID crashes
RAID 5 provides secure data protection using distributed parity. You can use three or more hard drives of the same capacity to create a RAID 5 array. Data is striped across the member drives together with parity information, so the array protects data against a single drive failure. The usable capacity of RAID 5 is (number of drives - 1) x the capacity of the smallest member drive. It is particularly suitable for personal or company use for important data saving.
When the RAID volume operates normally, the volume status is shown as Ready under "Disk Management" > "Volume Management", in the Current Disk Volume Configuration section.
- The server beeps for 1.5 sec twice when the drive fails.
- The Status LED flashes red continuously.
- Check the Current Disk Volume Configuration section. The volume status is In degraded mode.
You can check the error and warning messages for drive failure and disk volume in degraded mode respectively in the Event Logs.
Note: You can send and receive alert e-mails by configuring the alert notification. For the settings, please refer to the System Settings / Alert Notification section in the user manual.
Please follow the steps below to hot swap the failed hard drive:
- Prepare a new hard drive to rebuild RAID 5. The capacity of the new drive should be at least the same as that of the failed drive.
- Insert the new drive into the drive slot of the server. The server beeps for 1.5 seconds twice. The Status LED flashes red and green alternately.
- Check the Current Disk Volume Configuration section. The volume status is Rebuilding and the progress is shown.
- When the rebuilding is completed, the Status LED lights green and the volume status is Ready. RAID 5 protection is now active.
- You can check the disk volume information in the Event Logs.
Note: Do not install the new drive when the system is not yet in degraded mode, to avoid unexpected system failure.