We recently suffered a strange issue with one of our NAS devices - a QNAP 8-bay NAS. The device itself has two RAID-5 volumes of four disks each.

After a power outage that outlasted the UPS's ability to keep things running, the unit shut down. To our dismay, when it powered back up, only one of the two RAID volumes was available.

So, we opened a terminal and SSHed into the NAS in order to run a few diagnostic commands and see what was going on. In this situation, a knowledge of *nix will go a long way towards helping you understand the particular problems you are facing. The QNAP NAS boxes run a custom Linux implementation based on Ubuntu (Linux kernel 2.6).

Firstly, run mdadm - this command is used to manage and monitor software RAID devices in Linux. Output of mdadm --detail:

    # mdadm --detail /dev/md0
    ...
    UUID : xxxxxxxx:xxxxxxxx:xxxxxxxx:513ee963
    ...
    UUID : xxxxxxxx:xxxxxxxx:xxxxxxxx:514de4de

Both volumes seem to still exist, and the RAID disk info is intact. Using df -h (a command to check disk space/usage), we can see the /dev/md0 volume is OK, but the /dev/md1 volume is gone:

    # df -h
    Filesystem    Size    Used    Available  Use%  Mounted on
    /dev/md9      509.5M  121.5M  388.0M     24%   /mnt/HDA_ROOT
    df: /share/external/UHCI Host Controller: Protocol error

Trying to mount it manually failed, but it turns out there's a simple solution. Run a manual file system check and repair first:

    # e2fsck_64 -fp -C 0 /dev/md1

Depending on the size of your volume, this could take quite some time - about 4 hours in the case of our 12TB RAID-5 volume - and some parts of the process don't update you on the status. Now, mount the device:

    # mount -t ext4 /dev/md1 /share/MD1_DATA

Of course, this may not resolve your specific issues, but if this fails you may have bigger problems, like a corrupt swap partition or something else that will take more work to reconstruct.

#Mac volume could not be unmounted update

After researching some other people's solutions, there are some further steps you may be able to take that could help with your RAID volumes. They address errors such as:

    10:48:02 System 127.0.0.1 Storage & Snapshots Volume Storage & Snapshots File system not clean.
    Failed to check file system of volume 'VolQsync'.

(In this example there were 7 /dev/mapper/cachedev mount points, with the problem affecting 3 of them.)

First, check if the RAID is online:

    md_checker

Then run a file system check - this repairs the inconsistent file system:

    e2fsck_64 -f /dev/mapper/cachedev1

Then check again:

    e2fsck_64 -fp -C 0 /dev/mapper/cachedev1

Then you can try to re-mount the file system:

    mount -t ext4 /dev/mapper/cachedev1 /share/CACHEDEV1_DATA

After this you may need to restart your NAS - the file system should work again. For further reading, the QNAP forums have a wealth of information shared by some very knowledgeable users.

#Mac volume could not be unmounted how to

How to clear your entire primary drive and get it ready for repartitioning:

1. Boot the Mac OS X CD (well, it happens by default now).
2. Go into single-user mode (hold Command-S).
3. Run these commands:

        dd if=/dev/zero of=/dev/rdisk0 bs=1k count=1k
        pdisk /dev/rdisk0 -initialize
        reboot
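The diagnostic phase described above (mdadm, then df -h) can be sketched as a small script. The device names are taken from the article; the RUN=echo dry-run guard is my own addition, since the real commands need root on the NAS.

```shell
#!/bin/sh
# Dry-run sketch of the diagnostic steps: set RUN= (empty) to execute
# for real on the NAS (requires root).
RUN=echo

# Inspect each software RAID array (md0/md1 as in the article).
for md in /dev/md0 /dev/md1; do
    $RUN mdadm --detail "$md"
done

# Check which volumes actually came back mounted.
$RUN df -h
```

With the guard in place this only prints the commands it would run, which is a cheap way to review a recovery plan before touching a degraded array.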
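The repair-then-mount sequence for the missing volume can be wrapped up as follows. This is a sketch under the article's assumptions (QNAP's e2fsck_64, ext4, /dev/md1 mounted at /share/MD1_DATA); the dry-run guard is an assumption of mine.

```shell
#!/bin/sh
# Sketch of the recovery sequence: fsck first, then mount.
# RUN=echo makes this a dry run; clear it to execute (root required).
RUN=echo
DEV=/dev/md1              # the volume that failed to come back up
MNT=/share/MD1_DATA       # its mount point on the NAS

# -f forces the check, -p auto-repairs, -C 0 prints progress to stdout.
# Only attempt the mount if the check exits cleanly.
if $RUN e2fsck_64 -fp -C 0 "$DEV"; then
    $RUN mount -t ext4 "$DEV" "$MNT"
fi
```

Gating the mount on the fsck exit status avoids mounting a file system the repair pass could not bring back to a consistent state.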
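For the multi-volume case (several /dev/mapper/cachedevN devices, only some of them unclean), the per-volume steps can be looped. The count of 7 and the CACHEDEV*_DATA share names follow the article's example; the dry-run guard is again an assumption.

```shell
#!/bin/sh
# Dry-run sketch: check, re-check, and re-mount each cachedev volume.
RUN=echo
COUNT=7   # the example above had 7 /dev/mapper/cachedev mount points

i=1
while [ "$i" -le "$COUNT" ]; do
    dev=/dev/mapper/cachedev$i
    $RUN e2fsck_64 -f "$dev"            # repair the inconsistent file system
    $RUN e2fsck_64 -fp -C 0 "$dev"      # verify it now comes up clean
    $RUN mount -t ext4 "$dev" /share/CACHEDEV${i}_DATA
    i=$((i + 1))
done
```

Running the same three steps over every volume is simpler than tracking which 3 of the 7 are broken - e2fsck exits quickly on the volumes that are already clean.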
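The drive-wipe steps from the how-to can be collected into one guarded script. This is destructive, so the sketch defaults to a dry run; rdisk0 as the raw device for the primary drive is taken from the article.

```shell
#!/bin/sh
# DESTRUCTIVE: zeroes the start of the primary drive and re-initializes
# its partition map. Dry run by default; clear RUN to really do it.
RUN=echo
DISK=/dev/rdisk0   # raw device for the primary drive (per the article)

$RUN dd if=/dev/zero of="$DISK" bs=1k count=1k   # wipe the first 1 MB
$RUN pdisk "$DISK" -initialize                   # write a fresh partition map
$RUN reboot
```

Zeroing only the first 1 MB is enough to destroy the old partition table and file system headers, after which pdisk can lay down a clean map for repartitioning.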