Recovering a Failed QNAP Raid Volume

How to recover data from QNAP drives using testdisk from SystemRescueCd


Given the following scenario:

QNAP server was factory reset, clearing the software RAID information on the QNAP OS.
As such, all drives in the RAID were essentially orphaned.
Data on the drives remained intact.

Recovery Options:

In order to recover the information, we could proceed via many troubleshooting pathways, two of which I list below:

– Rebuilding the software RAID
– Recovering the data directly from the drives

I chose the second option, since I wasn’t too handy with administering the Linux Multiple Device (MD) driver, aka software RAID.
In this article, we will be recovering the data from ONE drive at a time, so it is best to plug in ONLY ONE of the drives to be recovered, along with a spare drive onto which the recovered data will be copied.

Recovery Software:
We will be using SystemRescueCD to perform the data recovery.

I assume the following:
You’ve already booted the SystemRescueCD
You either have console or ssh access (or whatever other means) to the SystemRescueCD shell
You have the drive to be recovered and a spare plugged in to your system

Lastly, this is key to understanding QNAP volumes:
QNAP uses Logical Volume Management (LVM) and the Linux MD software RAID technologies to manage its storage devices.
Partition 3 holds all the data on any given drive.
Keep this in mind as you start digging for your data on the QNAP drives.

Identify the Destination Drive

Before going through the recovery, you must prepare the directory to which you will copy the recovered data.
With the specs of your spare drive already in mind, issue the list hardware command (lshw) to determine the drive’s device name:
lshw -short -c disk
Once you match the device information to that of the spare drive, you can proceed to initialize (wipe/clean) the drive or mount it if it’s already prepared.
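If the lshw output is hard to match up against your drive’s specs, a quick cross-check is to list devices by size, model, and serial. A sketch, assuming lsblk is available (it ships with SystemRescueCD):

```shell
# Cross-check device names against drive size/model/serial.
# If lsblk is somehow absent, fall back to a note rather than failing.
lsblk -d -o NAME,SIZE,MODEL,SERIAL 2>/dev/null || echo "lsblk unavailable here"
```

The -d flag restricts the listing to whole disks, which keeps partitions out of the way while you match the spare.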

If the drive is already initialized, skip the next step, otherwise proceed …

Prepare the Destination Drive

You can initialize the drive for use on the SystemRescueCD as follows:
fdisk <device_name>, e.g. fdisk /dev/sda
Follow the prompts to create a Linux partition.
Note: Once the partition is created, the device you’ll actually be acting against is <device_name><partition_number>, e.g. /dev/sda1
Once you’ve written the changes to the disk, you can proceed to create the filesystem on the drive, using either form of the mkfs command:
mkfs -t <fs_type> <device_name><partition_number>, e.g. mkfs -t ext4 /dev/sda1
mkfs.<fs_type> <device_name><partition_number>, e.g. mkfs.ext4 /dev/sda1
Once the filesystem has been created, you can mount it.
Do so first by creating a directory on which the drive will be mounted, e.g.:
mkdir /mnt/recovery
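For reference, the whole preparation can also be scripted non-interactively with sfdisk instead of fdisk’s prompts. This is only a sketch: the device name and mount point are the assumed examples from the steps above, and the commands are echoed rather than executed so nothing gets wiped by accident. Remove the echo prefixes (and double-check the device name!) before running them for real.

```shell
# Dry-run sketch: print the commands that would prepare the spare drive.
DEV=/dev/sda          # assumed spare drive from the example above
MNT=/mnt/recovery     # assumed mount point

echo "sfdisk $DEV <<< 'type=83'"   # one Linux partition spanning the disk
echo "mkfs -t ext4 ${DEV}1"        # filesystem on the new partition
echo "mkdir -p $MNT"
echo "mount -t ext4 ${DEV}1 $MNT"
```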

Mount the Destination Drive

Mounting the drive is quite easy, simply invoke the mount command, e.g.:
mount -t ext4 /dev/sda1 /mnt/recovery

Your destination drive is now ready to be used!
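Before moving on, it’s worth confirming the mount actually took. A quick check, using the mount point assumed in the examples above:

```shell
# Confirm /mnt/recovery is mounted and has free space before copying into it.
# On a machine without that mount point, this just reports it as not mounted.
df -h /mnt/recovery 2>/dev/null || echo "/mnt/recovery is not mounted yet"
```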

Identify the Data Partition on the Source Drive


The following commands are to be issued from the SystemRescueCD session:

First, we need to determine what MD volumes the SystemRescueCD has detected.
You can do so by displaying the contents of the mdstat file under /proc as follows:

cat /proc/mdstat

Sample Output:

        Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
        md321 : active raid1 sdb5[0]
              7168000 blocks super 1.0 [2/1] [U_]
              bitmap: 1/1 pages [4KB], 65536KB chunk
        md13 : active raid1 sdb4[25]
              458880 blocks super 1.0 [24/1] [_U______________________]
              bitmap: 1/1 pages [4KB], 65536KB chunk
        md2 : active raid1 sdb3[0]
              3897063616 blocks super 1.0 [1/1] [U]
        md256 : active raid1 sdb2[1]
              530112 blocks super 1.0 [2/1] [_U]
              bitmap: 0/1 pages [0KB], 65536KB chunk
        md9 : active raid1 sdb1[25]
              530048 blocks super 1.0 [24/1] [_U______________________]
              bitmap: 1/1 pages [4KB], 65536KB chunk

As you can see from the above output, there is a drive whose 3rd partition (sdb3) has been assembled as an MD volume (md2); given its size, that is most likely the LVM data volume.
I’d say there is a 90% chance that this is the drive and partition we’re interested in.

Take note of the device information, in this case /dev/sdb3
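If several drives are attached and your mdstat output is busier than the sample above, a one-liner can pick out the arrays backed by a 3rd partition. A sketch, fed with two lines from the sample output as stand-in input; on a live system you would point awk at /proc/mdstat instead:

```shell
# Find md arrays whose member device is a 3rd partition (sdX3), i.e. the
# likely QNAP data volume. On a live system, replace the printf pipeline
# with:  awk '{ ... }' /proc/mdstat
printf '%s\n' \
  'md2 : active raid1 sdb3[0]' \
  'md9 : active raid1 sdb1[25]' |
awk '{ for (i = 5; i <= NF; i++) if ($i ~ /^sd[a-z]+3\[/) print $1, $i }'
# prints: md2 sdb3[0]
```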

Invoke Testdisk Partition Scan


So, again, we’ve determined the data to be on device /dev/sdb3
The next step is to run testdisk against this device:
testdisk /dev/sdb3
In the ensuing dialog, choose the following order of actions:

Create a new log file (or No Log)
Select a media ...: (choose the device, in this case /dev/sdb3)
Please select a partition table type ...: (choose EFI GPT)
Analyse
Quick Search

At this point, the drive scan will commence.

Once it completes, you’ll be presented with a partition table as detected by testdisk.

List Files for Recovery & Copy


In the resulting partition table, select the partition you think contains the data.
Press Shift + P
This will print the files on the partition.
Read the instructions at the bottom of the file listing …

q to quit
: to select the current file
a to select all files
shift + C to copy the selected files
c to copy the current file

Once you invoke the copy action, you will be prompted to navigate to the destination path.

Hopefully you’ve already completed that in the steps ‘Prepare the Destination Drive’ and ‘Mount the Destination Drive’ above.

Once the copy process is started, you’ll be presented with a progress indication.

Sit tight. The wait is worth it.

