Discussion:
zpool became unwriteable (it doesn't have "readonly" property set)
t***@gmail.com
2015-02-18 11:06:40 UTC
hello.

environment: gentoo linux
=================================================
Linux host 3.14.33-gentoo #1 SMP Tue Feb 17 23:48:20 CET 2015 x86_64
Intel(R) Core(TM) i5-4570 CPU @ 3.20GHz GenuineIntel GNU/Linux

[ebuild R ~] sys-kernel/spl-0.6.3-r1 USE="-custom-cflags -debug
-debug-log"
[ebuild R ~] sys-fs/zfs-kmod-0.6.3-r1 USE="-custom-cflags -debug
-rootfs"
[ebuild R ~] sys-fs/zfs-0.6.3-r2 USE="-bash-completion -custom-cflags
-debug (-kernel-builtin) -rootfs -static-libs -test-suite" PYTHON_TARGETS="python2_7
python3_3 -python3_4"

[ebuild R ] sys-libs/glibc-2.19-r1:2.2 USE="gd (multilib) -debug
(-hardened) -nscd -profile (-selinux) -suid -systemtap -vanilla"

gcc version 4.8.3 (Gentoo 4.8.3 p1.1, pie-0.5.9)
=================================================

after more than a year of use, yesterday my zpool got completely stuck for
writing.
the only thing I did that was "different" from normal use was that I
removed (destroyed) two obsolete zfs file systems. the commands executed
without a problem, and I got my space back from them.

something like 6 hours later, writes started hanging. there was no error, no
timeout, just an infinite hang of the process.

I was not able to kill the diskd process spawned by squid, so I had to reboot
the machine.
the reboot failed, so I tried to force it (reboot -f).
forcing failed, so I had to reboot without a sync (reboot -fn).

since then, I cannot write anything to the zpool nor to any zfs file
systems it hosts.
the "readonly" property is "off" on the pool, and the same is true for all
zfs file systems.
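
for reference, this is the kind of check I mean (plain property queries,
nothing version-specific as far as I know):

# verify "readonly" at pool level and recursively on every dataset
zpool get readonly pool0
zfs get -r -o name,property,value,source readonly pool0
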
I do not use deduplication.
I do not use snapshots.
I do not use zfs for root fs.

if I try "zpool status" and similar commands, they are executed just fine.
if I try "zpool scrub pool0", command hangs infinitely.
dmesg/syslog are empty.

I've tried compiling kernel 3.14.33 instead of 3.14.28, but that did not
help either.

is this some known issue, and is there a way to solve it?
could the zfs cache file in /etc have become corrupted?
where could I dig for more details/logs of zfs?
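
in case it helps someone searching later, these are the places I know of to
dig, with the caveat that paths and availability vary with the ZoL version
(this is what a 0.6.3 build exposes, as far as I can tell):

# kernel-side event log kept by zfsonlinux, separate from dmesg/syslog
zpool events -v
# per-pool txg history kstat (if the build exposes it) - shows whether
# transaction groups are still being synced at all
cat /proc/spl/kstat/zfs/pool0/txgs
# tunables / debug knobs of the loaded module
ls /sys/module/zfs/parameters/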

any tip or help would be welcome.

thx a lot.


zpool iostat pool0
              capacity     operations    bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
pool0       29.7T  13.8T     18      0  1.19M     56

zpool iostat -v pool0
                                                      capacity     operations    bandwidth
pool                                                alloc   free   read  write   read  write
--------------------------------------------------  -----  -----  -----  -----  -----  -----
pool0                                               29.7T  13.8T     17      0  1.10M     51
  raidz2                                            29.7T  13.8T     17      0  1.10M     34
    ata-WDC_WD30EFRX-68AX9N0_WD-WCC1T073521             -      -      8      0  72.2K     16
    ata-WDC_WD30EFRX-68AX9N0_WD-WMC1T225662             -      -      8      0  72.4K     18
    ata-WDC_WD30EFRX-68AX9N0_WD-WMC1T243189             -      -      8      0  72.4K     22
    ata-WDC_WD30EFRX-68AX9N0_WD-WMC1T270408             -      -      8      0  72.2K     20
    ata-WDC_WD30EFRX-68AX9N0_WD-WMC1T287400             -      -      8      0  72.5K     20
    ata-WDC_WD30EFRX-68AX9N0_WD-WMC1T323470             -      -      8      0  72.3K     19
    ata-WDC_WD30EFRX-68AX9N0_WD-WMC1T347548             -      -      8      0  72.2K     21
    ata-WDC_WD30EFRX-68AX9N0_WD-WMC1T360737             -      -      8      0  72.3K     23
    ata-WDC_WD30EFRX-68AX9N0_WD-WMC1T361593             -      -      8      0  72.0K     23
    ata-WDC_WD30EFRX-68AX9N0_WD-WMC1T363451             -      -      8      0  71.9K     21
    ata-WDC_WD30EFRX-68AX9N0_WD-WMC1T368830             -      -      8      0  72.4K     19
    ata-WDC_WD30EFRX-68AX9N0_WD-WMC1T376771             -      -      8      0  72.7K     18
    ata-WDC_WD30EFRX-68AX9N0_WD-WMC1T379696             -      -      8      0  72.6K     20
    ata-WDC_WD30EFRX-68AX9N0_WD-WMC1T388324             -      -      8      0  72.5K     21
    ata-WDC_WD30EFRX-68AX9N0_WD-WMC1T425581             -      -      8      0  72.2K     18
    ata-WDC_WD30EFRX-68AX9N0_WD-WMC1T425915             -      -      8      0  72.3K     15
logs                                                    -      -      -      -      -      -
  ata-INTEL_SSDSC2BB120G4_BTWL324501E9120LGN-part5   128K   944M      0      0    184      8
  ata-INTEL_SSDSC2BB120G4_BTWL324202F0120LGN-part5     8K   944M      0      0    184      8
cache                                                   -      -      -      -      -      -
  ata-INTEL_SSDSC2BB120G4_BTWL324501E9120LGN-part6  17.4M  27.9G      0      0     16  2.08K
  ata-INTEL_SSDSC2BB120G4_BTWL324202F0120LGN-part6  16.6M  27.9G      0      0     16  1.99K
--------------------------------------------------  -----  -----  -----  -----  -----  -----


zpool get all
NAME PROPERTY VALUE SOURCE
pool0 size 43.5T -
pool0 capacity 68% -
pool0 altroot - default
pool0 health ONLINE -
pool0 guid 9576311997480336172 default
pool0 version - default
pool0 bootfs - default
pool0 delegation on default
pool0 autoreplace off default
pool0 cachefile - default
pool0 failmode wait default
pool0 listsnapshots off default
pool0 autoexpand on local
pool0 dedupditto 0 default
pool0 dedupratio 1.00x -
pool0 free 13.8T -
pool0 allocated 29.7T -
pool0 readonly off -
pool0 ashift 12 local
pool0 comment - default
pool0 expandsize 0 -
pool0 freeing 0 default
pool0 leaked 0 default
pool0 feature@async_destroy active local
pool0 feature@empty_bpobj active local
pool0 feature@lz4_compress active local




t***@gmail.com
2015-02-18 11:57:05 UTC
hi, gregor.

thank you very much for the effort.

I destroyed the file systems while running kernel 3.14.28 and the same
version of the zfs packages as now. before destroying those file systems,
everything had been running stable for more than a year, and with kernel
3.14.28 for more than a month.

6 hours after the destruction, I noticed the problem with writing. maybe it
was not related to the destruction, but I had no other unusual activity with
the pool.
after the reboot (forced, without sync), the zpool was still not writeable.

then I installed kernel 3.14.33 and recompiled all the needed packages
afterwards. the modules load without a problem:
zfs 1623438 29
zunicode 315376 1 zfs
zavl 3805 1 zfs
zcommon 29419 1 zfs
znvpair 37468 2 zfs,zcommon
spl 47647 5 zfs,zavl,zunicode,zcommon,znvpair

the 'zpool history pool0' command hangs after printing only:
zpool history pool0
History for 'pool0':

and the 'zpool status -v pool0' gives:
zpool status -v pool0
pool: pool0
state: ONLINE
scan: scrub repaired 0 in 8h1m with 0 errors on Mon Feb 2 08:03:52 2015
config:

        NAME                                                  STATE     READ WRITE CKSUM
        pool0                                                 ONLINE       0     0     0
          raidz2-0                                            ONLINE       0     0     0
            ata-WDC_WD30EFRX-68AX9N0_WD-WCC1T073521           ONLINE       0     0     0
            ata-WDC_WD30EFRX-68AX9N0_WD-WMC1T225662           ONLINE       0     0     0
            ata-WDC_WD30EFRX-68AX9N0_WD-WMC1T243189           ONLINE       0     0     0
            ata-WDC_WD30EFRX-68AX9N0_WD-WMC1T270408           ONLINE       0     0     0
            ata-WDC_WD30EFRX-68AX9N0_WD-WMC1T287400           ONLINE       0     0     0
            ata-WDC_WD30EFRX-68AX9N0_WD-WMC1T323470           ONLINE       0     0     0
            ata-WDC_WD30EFRX-68AX9N0_WD-WMC1T347548           ONLINE       0     0     0
            ata-WDC_WD30EFRX-68AX9N0_WD-WMC1T360737           ONLINE       0     0     0
            ata-WDC_WD30EFRX-68AX9N0_WD-WMC1T361593           ONLINE       0     0     0
            ata-WDC_WD30EFRX-68AX9N0_WD-WMC1T363451           ONLINE       0     0     0
            ata-WDC_WD30EFRX-68AX9N0_WD-WMC1T368830           ONLINE       0     0     0
            ata-WDC_WD30EFRX-68AX9N0_WD-WMC1T376771           ONLINE       0     0     0
            ata-WDC_WD30EFRX-68AX9N0_WD-WMC1T379696           ONLINE       0     0     0
            ata-WDC_WD30EFRX-68AX9N0_WD-WMC1T388324           ONLINE       0     0     0
            ata-WDC_WD30EFRX-68AX9N0_WD-WMC1T425581           ONLINE       0     0     0
            ata-WDC_WD30EFRX-68AX9N0_WD-WMC1T425915           ONLINE       0     0     0
        logs
          ata-INTEL_SSDSC2BB120G4_BTWL324501E9120LGN-part5    ONLINE       0     0     0
          ata-INTEL_SSDSC2BB120G4_BTWL324202F0120LGN-part5    ONLINE       0     0     0
        cache
          ata-INTEL_SSDSC2BB120G4_BTWL324501E9120LGN-part6    ONLINE       0     0     0
          ata-INTEL_SSDSC2BB120G4_BTWL324202F0120LGN-part6    ONLINE       0     0     0

errors: No known data errors
Did you install a new kernel or did you update zfs directly before
destroying the filesystems?
Did you reboot after you experienced the problem?
Please attach the output of 'zpool status -v pool0' and 'zpool history
pool0'.
Gregor
Cédric Lemarchand
2015-02-18 12:29:14 UTC
Is there any activity on the pool? (zpool iostat -v or iostat -x)
Could you export the pool, rename the cache file, then import it? Could
you remove the SLOG devices?
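
Concretely, something like this (a sketch; /etc/zfs/zpool.cache is the
usual ZFS-on-Linux location for the cache file, adjust if yours differs):

zpool export pool0
mv /etc/zfs/zpool.cache /etc/zfs/zpool.cache.bak   # rename the cache file
zpool import -d /dev/disk/by-id pool0              # rescan devices instead of trusting the cache
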
--
Cédric Lemarchand
IT Infrastructure Manager
iXBlue
52, avenue de l'Europe
78160 Marly le Roi
France
Tel. +33 1 30 08 88 88
Mob. +33 6 37 23 40 93
Fax +33 1 30 08 88 00
www.ixblue.com

t***@gmail.com
2015-02-18 12:43:13 UTC
hi, cedric.

there is some read activity on the pool, and when I try to read something
from it, it succeeds.

zpool iostat -v
                                                      capacity     operations    bandwidth
pool                                                alloc   free   read  write   read  write
--------------------------------------------------  -----  -----  -----  -----  -----  -----
pool0                                               29.7T  13.8T      9      0   645K     29
  raidz2                                            29.7T  13.8T      9      0   645K     19
    ata-WDC_WD30EFRX-68AX9N0_WD-WCC1T073521             -      -      4      0  41.2K      9
    ata-WDC_WD30EFRX-68AX9N0_WD-WMC1T225662             -      -      4      0  41.3K     10
    ata-WDC_WD30EFRX-68AX9N0_WD-WMC1T243189             -      -      4      0  41.3K     12
    ata-WDC_WD30EFRX-68AX9N0_WD-WMC1T270408             -      -      4      0  41.2K     11
    ata-WDC_WD30EFRX-68AX9N0_WD-WMC1T287400             -      -      4      0  41.4K     11
    ata-WDC_WD30EFRX-68AX9N0_WD-WMC1T323470             -      -      4      0  41.3K     11
    ata-WDC_WD30EFRX-68AX9N0_WD-WMC1T347548             -      -      4      0  41.2K     12
    ata-WDC_WD30EFRX-68AX9N0_WD-WMC1T360737             -      -      4      0  41.3K     13
    ata-WDC_WD30EFRX-68AX9N0_WD-WMC1T361593             -      -      4      0  41.1K     13
    ata-WDC_WD30EFRX-68AX9N0_WD-WMC1T363451             -      -      4      0  41.1K     12
    ata-WDC_WD30EFRX-68AX9N0_WD-WMC1T368830             -      -      4      0  41.3K     11
    ata-WDC_WD30EFRX-68AX9N0_WD-WMC1T376771             -      -      4      0  41.5K     10
    ata-WDC_WD30EFRX-68AX9N0_WD-WMC1T379696             -      -      4      0  41.4K     11
    ata-WDC_WD30EFRX-68AX9N0_WD-WMC1T388324             -      -      4      0  41.4K     12
    ata-WDC_WD30EFRX-68AX9N0_WD-WMC1T425581             -      -      4      0  41.2K     10
    ata-WDC_WD30EFRX-68AX9N0_WD-WMC1T425915             -      -      4      0  41.2K      8
logs                                                    -      -      -      -      -      -
  ata-INTEL_SSDSC2BB120G4_BTWL324501E9120LGN-part5   128K   944M      0      0    104      4
  ata-INTEL_SSDSC2BB120G4_BTWL324202F0120LGN-part5     8K   944M      0      0    104      4
cache                                                   -      -      -      -      -      -
  ata-INTEL_SSDSC2BB120G4_BTWL324501E9120LGN-part6  17.5M  27.9G      0      0      9  1.19K
  ata-INTEL_SSDSC2BB120G4_BTWL324202F0120LGN-part6  16.6M  27.9G      0      0      9  1.13K
--------------------------------------------------  -----  -----  -----  -----  -----  -----



zpool iostat -x
invalid option 'x'
usage:
iostat [-v] [-T d|u] [pool] ... [interval [count]]
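
I guess you meant sysstat's iostat rather than zpool's; that one does have
-x and shows extended per-device statistics, which should reveal whether an
individual disk is stalling:

# sysstat's iostat, not zpool's: extended device stats every 5 seconds
iostat -x 5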

I tried to remove one log device (they are not mirrored, and they are almost
never used - no sync writes), but the command hangs:
zpool remove pool0 ata-INTEL_SSDSC2BB120G4_BTWL324501E9120LGN-part5

maybe that's because the content of that log first has to be written to the
pool, and the pool fails to accept it.
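
one way to see whether the removal is making any progress at all (just
watching counters, nothing invasive):

# watch the log device's alloc column; if the slog were being flushed,
# it should shrink over time
zpool iostat -v pool0 5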

is export, rename and import safe for the content of my pool?
I have too much data I would not like to lose there. :)

thx.


Cédric Lemarchand
2015-02-18 14:44:07 UTC
Post by t***@gmail.com
is export, rename and import safe for the content of my pool?
I have too much data I would not like to lose there. :)
Well, you have backups, right? ;-)

t***@gmail.com
2015-02-18 14:52:39 UTC
well... kind of "no". :)

it's storage for private use, not something business-related, so I just
skipped connecting it to a tape library for cost reasons. :)
I thought raidz2 would be enough for it.

I mean, I could theoretically still back everything up, as I have read access
(I'm running some crucial backups right now), but it's not so easy to get
~30TB backed up around your flat. :D



Cédric Lemarchand
2015-02-18 16:50:01 UTC
Honestly, I am far from sure that it will help; keep the data safe until
somebody comes up with a smarter idea.

Cheers
Hajo Möller
2015-02-18 20:36:53 UTC
Post by t***@gmail.com
thought raidz2 would be enough for it.
RAID (of any kind) is not backup. Also, running SLOG devices unmirrored is
not recommended.
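
For the record, once the pool accepts writes again, turning the two
standalone log devices into a mirrored log would look roughly like this
(a sketch, using the device names from your status output):

zpool remove pool0 ata-INTEL_SSDSC2BB120G4_BTWL324501E9120LGN-part5
zpool remove pool0 ata-INTEL_SSDSC2BB120G4_BTWL324202F0120LGN-part5
zpool add pool0 log mirror \
    ata-INTEL_SSDSC2BB120G4_BTWL324501E9120LGN-part5 \
    ata-INTEL_SSDSC2BB120G4_BTWL324202F0120LGN-part5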

Anything in dmesg or syslog? What happens when you strace zpool history?
Interesting stack traces?
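
Ways to get those traces, assuming your kernel has sysrq enabled and
/proc/<pid>/stack support (CONFIG_STACKTRACE):

# dump stacks of all blocked (D-state) tasks into the kernel log
echo w > /proc/sysrq-trigger
dmesg | tail -n 100
# or inspect the hung command directly
cat /proc/$(pgrep -f 'zpool history')/stack
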
--
Regards,
Hajo Möller

t***@gmail.com
2015-02-18 21:51:05 UTC
hi, hajo.

nothing zfs-related in dmesg and syslog. :(

strace of zpool history hangs on this:

strace zpool history pool0
execve("/sbin/zpool", ["zpool", "history", "pool0"], [/* 33 vars */]) = 0
brk(0) = 0xb25000
mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7ff4ff87b000
access("/etc/ld.so.preload", R_OK) = -1 ENOENT (No such file or directory)
open("/etc/ld.so.cache", O_RDONLY|O_CLOEXEC) = 3
fstat(3, {st_mode=S_IFREG|0644, st_size=51882, ...}) = 0
mmap(NULL, 51882, PROT_READ, MAP_PRIVATE, 3, 0) = 0x7ff4ff86e000
close(3) = 0
open("/lib64/libnvpair.so.1", O_RDONLY|O_CLOEXEC) = 3
read(3, "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0\340P\0\0\0\0\0\0"..., 832) = 832
fstat(3, {st_mode=S_IFREG|0755, st_size=84800, ...}) = 0
mmap(NULL, 2180144, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7ff4ff447000
mprotect(0x7ff4ff45b000, 2093056, PROT_NONE) = 0
mmap(0x7ff4ff65a000, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x13000) = 0x7ff4ff65a000
close(3) = 0
open("/lib64/libuutil.so.1", O_RDONLY|O_CLOEXEC) = 3
read(3, "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0\260Z\0\0\0\0\0\0"..., 832) = 832
fstat(3, {st_mode=S_IFREG|0755, st_size=73424, ...}) = 0
mmap(NULL, 2173304, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7ff4ff234000
mprotect(0x7ff4ff245000, 2093056, PROT_NONE) = 0
mmap(0x7ff4ff444000, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x10000) = 0x7ff4ff444000
mmap(0x7ff4ff446000, 2424, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0) = 0x7ff4ff446000
close(3) = 0
open("/lib64/libzpool.so.2", O_RDONLY|O_CLOEXEC) = 3
read(3, "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0 U\2\0\0\0\0\0"..., 832) = 832
fstat(3, {st_mode=S_IFREG|0755, st_size=1202696, ...}) = 0
mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7ff4ff86d000
mmap(NULL, 4239672, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7ff4fee28000
mprotect(0x7ff4fef47000, 2093056, PROT_NONE) = 0
mmap(0x7ff4ff146000, 32768, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x11e000) = 0x7ff4ff146000
mmap(0x7ff4ff14e000, 938296, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0) = 0x7ff4ff14e000
close(3) = 0
open("/lib64/libzfs.so.2", O_RDONLY|O_CLOEXEC) = 3
read(3, "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0P\252\0\0\0\0\0\0"..., 832) = 832
fstat(3, {st_mode=S_IFREG|0755, st_size=267488, ...}) = 0
mmap(NULL, 2362984, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7ff4febe7000
mprotect(0x7ff4fec26000, 2097152, PROT_NONE) = 0
mmap(0x7ff4fee26000, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x3f000) = 0x7ff4fee26000
close(3) = 0
open("/lib64/libblkid.so.1", O_RDONLY|O_CLOEXEC) = 3
read(3, "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0\220\206\0\0\0\0\0\0"..., 832) = 832
fstat(3, {st_mode=S_IFREG|0755, st_size=246416, ...}) = 0
mmap(NULL, 2345896, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7ff4fe9aa000
mprotect(0x7ff4fe9e2000, 2097152, PROT_NONE) = 0
mmap(0x7ff4febe2000, 16384, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x38000) = 0x7ff4febe2000
mmap(0x7ff4febe6000, 2984, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0) = 0x7ff4febe6000
close(3) = 0
open("/lib64/libuuid.so.1", O_RDONLY|O_CLOEXEC) = 3
read(3, "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0\200\25\0\0\0\0\0\0"..., 832) = 832
fstat(3, {st_mode=S_IFREG|0755, st_size=18728, ...}) = 0
mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7ff4ff86c000
mmap(NULL, 2113936, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7ff4fe7a5000
mprotect(0x7ff4fe7a9000, 2093056, PROT_NONE) = 0
mmap(0x7ff4fe9a8000, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x3000) = 0x7ff4fe9a8000
close(3) = 0
open("/lib64/libpthread.so.0", O_RDONLY|O_CLOEXEC) = 3
read(3, "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0\260\177\0\0\0\0\0\0"..., 832) = 832
fstat(3, {st_mode=S_IFREG|0755, st_size=140714, ...}) = 0
mmap(NULL, 2217104, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7ff4fe587000
mprotect(0x7ff4fe5a0000, 2093056, PROT_NONE) = 0
mmap(0x7ff4fe79f000, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x18000) = 0x7ff4fe79f000
mmap(0x7ff4fe7a1000, 13456, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0) = 0x7ff4fe7a1000
close(3) = 0
open("/lib64/libc.so.6", O_RDONLY|O_CLOEXEC) = 3
read(3, "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0\300N\2\0\0\0\0\0"..., 832) = 832
fstat(3, {st_mode=S_IFREG|0755, st_size=1712376, ...}) = 0
mmap(NULL, 3824728, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7ff4fe1e1000
mprotect(0x7ff4fe37e000, 2093056, PROT_NONE) = 0
mmap(0x7ff4fe57d000, 24576, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x19c000) = 0x7ff4fe57d000
mmap(0x7ff4fe583000, 15448, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0) = 0x7ff4fe583000
close(3) = 0
open("/lib64/librt.so.1", O_RDONLY|O_CLOEXEC) = 3
read(3, "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0\20)\0\0\0\0\0\0"..., 832) = 832
fstat(3, {st_mode=S_IFREG|0755, st_size=31600, ...}) = 0
mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7ff4ff86b000
mmap(NULL, 2128920, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7ff4fdfd9000
mprotect(0x7ff4fdfe0000, 2093056, PROT_NONE) = 0
mmap(0x7ff4fe1df000, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x6000) = 0x7ff4fe1df000
close(3) = 0
open("/lib64/libz.so.1", O_RDONLY|O_CLOEXEC) = 3
read(3, "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0\360'\0\0\0\0\0\0"..., 832) = 832
fstat(3, {st_mode=S_IFREG|0755, st_size=88488, ...}) = 0
mmap(NULL, 2183688, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7ff4fddc3000
mprotect(0x7ff4fddd8000, 2093056, PROT_NONE) = 0
mmap(0x7ff4fdfd7000, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x14000) = 0x7ff4fdfd7000
close(3) = 0
open("/lib64/libzfs_core.so.1", O_RDONLY|O_CLOEXEC) = 3
read(3, "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0\20\22\0\0\0\0\0\0"..., 832) = 832
fstat(3, {st_mode=S_IFREG|0755, st_size=14312, ...}) = 0
mmap(NULL, 2109840, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7ff4fdbbf000
mprotect(0x7ff4fdbc2000, 2093056, PROT_NONE) = 0
mmap(0x7ff4fddc1000, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x2000) = 0x7ff4fddc1000
close(3) = 0
open("/lib64/libm.so.6", O_RDONLY|O_CLOEXEC) = 3
read(3, "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0Pk\0\0\0\0\0\0"..., 832) = 832
fstat(3, {st_mode=S_IFREG|0755, st_size=1018072, ...}) = 0
mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7ff4ff86a000
mmap(NULL, 3113272, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7ff4fd8c6000
mprotect(0x7ff4fd9be000, 2093056, PROT_NONE) = 0
mmap(0x7ff4fdbbd000, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0xf7000) = 0x7ff4fdbbd000
close(3) = 0
open("/lib64/libdl.so.2", O_RDONLY|O_CLOEXEC) = 3
read(3, "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0\220\20\0\0\0\0\0\0"..., 832) = 832
fstat(3, {st_mode=S_IFREG|0755, st_size=14480, ...}) = 0
mmap(NULL, 2109720, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7ff4fd6c2000
mprotect(0x7ff4fd6c5000, 2093056, PROT_NONE) = 0
mmap(0x7ff4fd8c4000, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x2000) = 0x7ff4fd8c4000
close(3) = 0
mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7ff4ff869000
mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7ff4ff868000
mmap(NULL, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7ff4ff866000
arch_prctl(ARCH_SET_FS, 0x7ff4ff866b80) = 0
mprotect(0x7ff4fe57d000, 16384, PROT_READ) = 0
mprotect(0x7ff4fd8c4000, 4096, PROT_READ) = 0
mprotect(0x7ff4fdbbd000, 4096, PROT_READ) = 0
mprotect(0x7ff4fe9a8000, 4096, PROT_READ) = 0
mprotect(0x7ff4fe79f000, 4096, PROT_READ) = 0
mprotect(0x7ff4fe1df000, 4096, PROT_READ) = 0
mprotect(0x7ff4fdfd7000, 4096, PROT_READ) = 0
mprotect(0x7ff4ff444000, 4096, PROT_READ) = 0
mprotect(0x7ff4ff65a000, 4096, PROT_READ) = 0
mprotect(0x7ff4fddc1000, 4096, PROT_READ) = 0
mprotect(0x7ff4febe2000, 12288, PROT_READ) = 0
mprotect(0x7ff4ff146000, 8192, PROT_READ) = 0
mprotect(0x7ff4fee26000, 4096, PROT_READ) = 0
mprotect(0x61a000, 4096, PROT_READ) = 0
mprotect(0x7ff4ff87c000, 4096, PROT_READ) = 0
munmap(0x7ff4ff86e000, 51882) = 0
set_tid_address(0x7ff4ff866e50) = 11485
set_robust_list(0x7ff4ff866e60, 24) = 0
futex(0x7fff89699d78, FUTEX_WAKE_PRIVATE, 1) = 0
futex(0x7fff89699d78, FUTEX_WAIT_BITSET_PRIVATE|FUTEX_CLOCK_REALTIME, 1, NULL, 7ff4ff866b80) = -1 EAGAIN (Resource temporarily unavailable)
rt_sigaction(SIGRTMIN, {0x7ff4fe58e9d0, [], SA_RESTORER|SA_SIGINFO, 0x7ff4fe598240}, NULL, 8) = 0
rt_sigaction(SIGRT_1, {0x7ff4fe58ea50, [], SA_RESTORER|SA_RESTART|SA_SIGINFO, 0x7ff4fe598240}, NULL, 8) = 0
rt_sigprocmask(SIG_UNBLOCK, [RTMIN RT_1], NULL, 8) = 0
getrlimit(RLIMIT_STACK, {rlim_cur=8192*1024, rlim_max=RLIM64_INFINITY}) = 0
brk(0) = 0xb25000
brk(0xb46000) = 0xb46000
access("/sys/module/zfs", F_OK) = 0
open("/dev/zfs", O_RDWR) = 3
open("/etc/mtab", O_RDONLY) = 4
open("/etc/dfs/sharetab", O_RDONLY) = 5
open("/dev/zfs", O_RDWR) = 6
ioctl(3, 0x5a05, 0x7fff89692710) = 0
fstat(1, {st_mode=S_IFCHR|0620, st_rdev=makedev(136, 3), ...}) = 0
mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7ff4ff87a000
write(1, "History for 'pool0':\n", 21History for 'pool0':
) = 21
mmap(NULL, 135168, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7ff4ff845000
ioctl(3, 0x5a0a
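
a note on where it stops: if I'm counting the 0.6.3 zfs_ioc enum right, ZFS
ioctl numbers start at ('Z' << 8) = 0x5a00, so the 0x5a05 that returned 0
above would be ZFS_IOC_POOL_STATS and the 0x5a0a it hangs in would be
ZFS_IOC_POOL_GET_HISTORY - which matches 'zpool status' working while
'zpool history' blocks inside the kernel. quick check of the base:

# base of the ZFS_IOC range: 'Z' << 8
python -c "print(hex(ord('Z') << 8))"   # -> 0x5a00
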
t***@gmail.com
2015-02-24 13:03:36 UTC
OK, problem solved!

reverting to the old zfs package versions made my zfs immediately and
completely usable again.

I had been using gentoo's packages

sys-fs/zfs-0.6.3-r2

(from December 2nd) and

sys-fs/zfs-kmod-0.6.3-r1
sys-kernel/spl-0.6.3-r1

(from December 1st) without a problem.
unfortunately, it seems one of those packages was responsible for my problem
after all, as switching back to

sys-fs/zfs-0.6.3
sys-fs/zfs-kmod-0.6.3
sys-kernel/spl-0.6.3

solved it, and now I'm able to run zfs with both the 3.14.28 and 3.14.33
kernels.

as this is a gentoo-related issue, I will try to inform the gentoo package
maintainers about it.
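
for anyone who wants to stay on the working versions until this is sorted
out, masking the newer revisions should do it (a sketch of
/etc/portage/package.mask, standard portage syntax):

# /etc/portage/package.mask - keep the known-good 0.6.3 ebuilds
>sys-fs/zfs-0.6.3
>sys-fs/zfs-kmod-0.6.3
>sys-kernel/spl-0.6.3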

thx everybody!