I know this thread is a bit old, but I'm also on OVH running 11.0-RELEASE-p15 and I get the error others have noted. One of my machines is quite happy to ping their suggested test address, while the other fails with:
ping6 -c 4 2001:4860:4860::8888
PING6(56=40+8+8 bytes) 2607:5300:60:2db2:: -->...
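In case it helps the next person on OVH: before blaming the upstream, I'd compare the two machines' IPv6 default routes. On FreeBSD that's just:
route -n get -inet6 default
netstat -rn -f inet6
If the broken machine has no inet6 default route, the ping6 failure is local rather than anything at Google's end.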
Beeblebrox, thanks for your help. Mounting the filesystem read-only allowed me to read all the data from the volume: 100% of it was recovered and no files were corrupted. I've kept an image of the damaged filesystem so that, on my own time, I can try to find and fix the bug in the kernel.
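For anyone who finds this thread later: the read-only import that did the trick was along these lines. I'm reciting from memory, so treat the exact flags as approximate; the id and names are the ones from elsewhere in this thread.
zpool import -o readonly=on -f -R /altroot 10433152746165646153 olddata
A read-only import never writes to the pool, which, I assume, is why the auto-scrub never got a chance to trigger the panic.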
t1066:
I have tried this a number of times.
As soon as ZFS starts up, the kernel panics. Looking at the backtrace, it seems to happen as soon as the system tries to auto-scrub. I was looking at zpool scrub -s, but I think it will be a race. As soon as the filesystem copy is done I'll take a few...
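For what it's worth, the one-liner I had in mind looked like this (same id, altroot, and new pool name as elsewhere in this thread; the race is that the panic may well land before the scrub -s does):
zpool import -f -R /altroot 10433152746165646153 olddata && zpool scrub -s olddata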
Please see post #25
zpool import -f -R /altroot 10433152746165646153 olddata
The only reason there are 2 pools named email is that one is the original that I'm trying to get the data off.
In process:
dd if=/dev/ad16s1g of=zfsimage.dat bs=1m
Hope I'm doing it right...
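If the disk turns out to be flaky, it may be worth the extra flags below so dd doesn't stop at the first read error. This is stock dd behaviour, nothing exotic, though note that with a large block size a single bad sector zero-fills the whole block:
dd if=/dev/ad16s1g of=zfsimage.dat bs=1m conv=noerror,sync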
Step 2 is done, albeit a bit complicated since I have 2 pools named email.
zpool list
does not show the other pool, but
zpool import
shows:

  pool: email
    id: 10433152746165646153
 state: ONLINE
status: The pool was last...
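To dodge the duplicate name, the import can be done by numeric id with a new name assigned on the way in, something like this (oldemail is just a placeholder name of my choosing):
zpool import -f 10433152746165646153 oldemail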
Thanks for your help. I'll take it up with them. In the meantime I'm dumping the filesystem to a file to test it like that. If it still croaks I'll sign up for FreeBSDCon and take it down there on an external drive.
Would you suggest I run the following?
zpool import -f 10433152746165646153
Nearly every command I've tried so far on this thing has resulted in a kernel panic. So far I've tried 8.3 32-bit, 8.3 64-bit, and now 9.0 64-bit, with very consistent panics.
Running zpool import lists only one pool.
  pool: email
    id: 10433152746165646153
 state: ONLINE
status: The pool was last accessed by another system.
action: The pool can be imported using its name or numeric identifier and
        the '-f' flag.
   see: http://www.sun.com/msg/ZFS-8000-EY
config...
How do you tell ZFS to look on that disk for the partition? If I do a zfs list, here is what I get:
zfs list
NAME    USED  AVAIL  REFER  MOUNTPOINT
email   118M   356G   118M  /email
That's my new email zfs filesystem living on /dev/ad0s1g.
As you can see from my feeble attempts above I...
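For the archives, the piece I was missing: zpool import scans /dev on its own, and the -d flag points it at a different directory of device nodes if a pool isn't turning up, e.g.:
zpool import -d /dev
Pools it finds that way still have to be imported (by name or numeric id) before zfs list will show them.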
I'm still having problems with this ZFS thing. I upgraded to 9.0-RELEASE.
The replacement filesystem has a ZFS pool called email. I need to find a way to access the email ZFS pool on the old disk, which is installed in the same system. It's on /dev/ad16s1g, whereas the "good" ZFS system is on...
It would appear that I'm running pool version 14. I thought I read somewhere that 8.3 was supposed to support version 28. I was planning to upgrade it to 9 anyway, so I'll try that and see if it's able to read the 32-bit filesystem that way.
Just for kicks I tried to rebuild the kernel (dual-booted to 32-bit)...
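For reference, the version check is just stock zpool subcommands (email being the pool name from this thread; zpool get only works on a pool that's already imported):
zpool get version email
zpool upgrade
The second one, run with no arguments, lists any imported pools that are below the version the running system supports.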
The swap was on a slice. There was only 1 ZFS partition.
FreeBSD cl-t153-284cl 8.3-RELEASE FreeBSD 8.3-RELEASE #0: Mon Apr 9 21:23:18 UTC 2012 root@mason.cse.buffalo.edu:/usr/obj/usr/src/sys/GENERIC amd64
zpool import -f -R /dev/ad16s1g email
Unexpected XML: name=stripesize data="0"...
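In hindsight, -R expects an altroot directory rather than a device node, so that invocation was never going to fly. Something closer (the directory and new pool name here are placeholders) would have been:
zpool import -f -R /mnt/old 10433152746165646153 oldemail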
We completely replaced the hardware. Same issue. I suspect it may be a bug in ZFS where it's unable to deal with some sort of on-disk corruption. There are now 2 drives in the system: one with FreeBSD 64-bit and the original. Any suggestions on how I might try to mount...
All of the SMART tests passed. I have arranged to have a fresh hard drive installed with a fresh copy of 64-bit FreeBSD on it. I'll see if I can use that to read the ZFS partitions.
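For anyone wanting to rerun such checks themselves, smartmontools (sysutils/smartmontools in ports) is the usual tool. I can't say these exact invocations are what the datacenter ran, but they are the standard ones:
smartctl -t long /dev/ad16
smartctl -a /dev/ad16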
The kernel is a GENERIC 8.3-RELEASE-p1 as of May 4th 2012.
The only startup options (in /boot/loader.conf) are
vm.kmem_size="512M"
vm.kmem_size_max="512M"
vfs.zfs.arc_max="160M"
These were taken from the ZFSTuningGuide. I also tried recompiling the kernel with the KVA_PAGES option, but it would panic...
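For anyone searching later: on i386 that's a kernel config line along these lines. 512 is the value I believe the wiki suggests for giving ZFS more kernel address space; I don't recall exactly what I used:
options         KVA_PAGES=512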