Discussion: zpool create: "Device or resource busy"?
Ulli Horlacher
2013-09-02 14:05:51 UTC
I have added a new hard disk and created a new partition table:

***@vms3:~# fdisk -l /dev/sde

Disk /dev/sde: 2000.0 GB, 1999999336448 bytes
255 heads, 63 sectors/track, 243152 cylinders, total 3906248704 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0cc24132

Device     Boot       Start         End      Blocks  Id  System
/dev/sde1              2048    67110911    33554432  83  Linux
/dev/sde2          67110912   134219775    33554432  83  Linux
/dev/sde3         134219776  3906248703  1886014464  bf  Solaris


But when I try to set up a new zpool for zfs I get:

***@vms3:~# zpool create -m /zfs/data data /dev/sde3
cannot open '/dev/sde3': Device or resource busy
cannot create 'data': one or more vdevs refer to the same device, or one of
the devices is part of an active md or lvm device

Why?

/dev/sde is completely new and unused, I do not have any md or lvm devices.

The existing setup is:

***@vms3:~# ll /zfs/
drwxr-xr-x root root - 2013-08-03 23:38:27 /zfs/lxc
drwxr-xr-x root root - 2013-08-11 10:50:41 /zfs/test

***@vms3:~# zpool list -v
NAME  SIZE  ALLOC  FREE  CAP  DEDUP  HEALTH  ALTROOT
lxc  992G  63.0G  929G  6%  1.00x  ONLINE  -
  243556e4a386c3176-part1  992G  63.0G  929G  -
test  14.9G  11.2M  14.9G  0%  1.00x  ONLINE  -
  scsi-3600508e000000000acd4127796c80b0f-part3  14.9G  11.2M  14.9G  -
--
Ullrich Horlacher Informationssysteme und Serverbetrieb
Rechenzentrum IZUS/TIK E-Mail: horlacher-Jqd5/X81recL63KmMnjC+***@public.gmane.org
Universitaet Stuttgart Tel: ++49-711-68565868
Allmandring 30a Fax: ++49-711-682357
70550 Stuttgart (Germany) WWW: http://www.tik.uni-stuttgart.de/
Gordan Bobic
2013-09-02 14:09:47 UTC
You really shouldn't be using /dev/sd? device nodes - they are not
deterministic between reboots, even if they appear to be at a glance (see
what happens if you remove a disk from the middle). Use
/dev/disk/by-id/wwn-* instead. It is possible you have a reference to
/dev/sde in your zpool.cache referring to another pool, or something like
that.
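Something along these lines should show whether the cache still references
the device (treat it as a sketch; the cache file is usually
/etc/zfs/zpool.cache):

  zdb -C | grep -i sde
  strings /etc/zfs/zpool.cache | grep sde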



Ulli Horlacher
2013-09-02 14:29:35 UTC
Post by Gordan Bobic
You really shouldn't be using /dev/sd? device nodes - they are not
deterministic between reboots, even if they appear to be at a glance (see
what happens if you remove a disk from the middle). Use
/dev/disk/by-id/wwn-* instead.
It is possible you have a reference to /dev/sde in your zpool.cache
referring to another pool, or something like that.
***@vms3:~# l /dev/disk/by-id/ | grep sde3
lRWX - 2013-09-02 15:45 /dev/disk/by-id/scsi-3600508e0000000003f8903fd35c68a0c-part3 -> ../../sde3
lRWX - 2013-09-02 15:45 /dev/disk/by-id/wwn-0x600508e0000000003f8903fd35c68a0c-part3 -> ../../sde3

***@vms3:~# zpool create -m /zfs/data data /dev/disk/by-id/wwn-0x600508e0000000003f8903fd35c68a0c-part3
cannot open '/dev/disk/by-id/wwn-0x600508e0000000003f8903fd35c68a0c-part3': Device or resource busy
cannot create 'data': one or more vdevs refer to the same device, or one of
the devices is part of an active md or lvm device


Same error message as before.
--
Ullrich Horlacher Informationssysteme und Serverbetrieb
Rechenzentrum IZUS/TIK E-Mail: horlacher-Jqd5/X81recL63KmMnjC+***@public.gmane.org
Universitaet Stuttgart Tel: ++49-711-68565868
Allmandring 30a Fax: ++49-711-682357
70550 Stuttgart (Germany) WWW: http://www.tik.uni-stuttgart.de/
Jorge Gonzalez
2013-09-02 14:44:07 UTC
Can you try the command "lsblk"? It shows all block devices and their
dependencies (i.e. when some blkdev is used by another: LUKS, LVM, MD,
etc.).

In my CentOS 6 system "lsblk" belongs to the util-linux-ng package...
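For example, something like this should show whether anything (md, LVM,
multipath, ...) is stacked on top of the partition; the available columns
can differ between util-linux versions:

  lsblk -o NAME,TYPE,SIZE,MOUNTPOINT /dev/sde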

BR
J.
Ulli Horlacher
2013-09-02 15:28:55 UTC
Post by Jorge Gonzalez
Can you try the command "lsblk"? It shows all block devices and their
dependencies (i.e. when some blkdev is used by another: LUKS, LVM, MD,
etc.).
***@vms3:~# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
(...)
sde 8:64 0 1.8T 0 disk
|-sde1 8:65 0 32G 0 part
|-sde2 8:66 0 32G 0 part
|-sde3 8:67 0 1.8T 0 part
`-3600508e0000000003f8903fd35c68a0c (dm-2) 252:2 0 1.8T 0 mpath

The multipath device-mapper is the problem!

Shall I use dm-2 as the zpool vdev ?
--
Ullrich Horlacher Informationssysteme und Serverbetrieb
Rechenzentrum IZUS/TIK E-Mail: horlacher-Jqd5/X81recL63KmMnjC+***@public.gmane.org
Universitaet Stuttgart Tel: ++49-711-68565868
Allmandring 30a Fax: ++49-711-682357
70550 Stuttgart (Germany) WWW: http://www.tik.uni-stuttgart.de/
Jorge Gonzalez
2013-09-02 16:32:39 UTC
Post by Ulli Horlacher
(...)
`-3600508e0000000003f8903fd35c68a0c (dm-2) 252:2 0 1.8T 0 mpath
The multipath device-mapper is the problem!
Shall I use dm-2 as the zpool vdev ?
First you need to sort out the Multipath issue. Is the device a true
multipath device?

If you are not (consciously) using multipath, I'd recommend you disable
or uninstall it ("chkconfig multipathd off" on RH and similar)
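Roughly, treating the exact service and package names as
distribution-dependent:

  service multipathd stop && chkconfig multipathd off                  # RHEL/CentOS
  service multipath-tools stop && update-rc.d multipath-tools disable  # Debian/Ubuntu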

If you _are_ using multipathing on purpose, you need to see why the
multipath system sees your disk twice.

Before disabling multipath, can you check "multipath -ll" please?

Thx
J.

Ulli Horlacher
2013-09-02 17:29:49 UTC
Post by Jorge Gonzalez
(...)
First you need to sort out the Multipath issue. Is the device a true
multipath device?
No, it is a (new) local SATA disk. More precisely: it is a RAID1 device
consisting of 2 SATA disks on a SAS controller. But Linux recognizes it as
one disk.
Post by Jorge Gonzalez
If you are not (consciously) using multipath, I'd recommend you disable
or uninstall it ("chkconfig multipathd off" on RH and similar)
I have true multipath SAN devices (via FC) on this server, too.
Therefore I need multipath, but not for /dev/sde
Post by Jorge Gonzalez
If you _are_ using multipathing on purpose, you need to see why the
multipath system sees your disk twice.
The disk /dev/sde is there only once; the SAN disks appear twice because of
the dual-channel FC adapter. So far, everything is OK.
Post by Jorge Gonzalez
Before disabling multipath, can you check "multipath -ll" please?
***@vms3:/etc# multipath -ll
243556e4a386c3176 dm-1 SCST_FIO,CUnJ8l1vXWaKKMwu
size=1000G features='0' hwhandler='0' wp=rw
|-+- policy='round-robin 0' prio=1 status=active
| `- 3:0:0:2 sdd 8:48 active ready running
`-+- policy='round-robin 0' prio=1 status=enabled
`- 2:0:0:2 sdb 8:16 active ready running
23176736668595731 dm-0 SCST_FIO,1vsfhYW1C28YbADT
size=2.0T features='0' hwhandler='0' wp=rw
|-+- policy='round-robin 0' prio=1 status=active
| `- 2:0:0:0 sda 8:0 active ready running
`-+- policy='round-robin 0' prio=1 status=enabled
`- 3:0:0:0 sdc 8:32 active ready running


These are the SAN disks only, after I have blacklisted the local SAS and
SATA disks.

This server is especially for zfs testing: I have zfs partitions on local
disks and on SAN devices, too. I also test RAID1 via disk controller and
via zfs.
--
Ullrich Horlacher Informationssysteme und Serverbetrieb
Rechenzentrum IZUS/TIK E-Mail: horlacher-Jqd5/X81recL63KmMnjC+***@public.gmane.org
Universitaet Stuttgart Tel: ++49-711-68565868
Allmandring 30a Fax: ++49-711-682357
70550 Stuttgart (Germany) WWW: http://www.tik.uni-stuttgart.de/
Ulli Horlacher
2013-09-04 14:54:51 UTC
Post by Ulli Horlacher
No, it is a (new) local SATA disk. More precisely: it is a RAID1 device
consisting of 2 SATA disks on a SAS controller. But Linux recognizes it as
one disk.
(...)
This server is especially for zfs testing: I have zfs partitions on local
disks and on SAN devices, too. I also test RAID1 via disk controller and
via zfs.
After several write tests (*) I found the hardware RAID1 (sde) horribly
slow with zfs: about 1 MB/s! With ext4 I get at least 30 MB/s.

The same disks with zfs software RAID1 (mirror) give me 150 MB/s!

Then I tried it with (old) SAS disks, hardware RAID1: about 80 MB/s with zfs.

I suppose the onboard LSI RAID controller is very bad at handling SATA
disks in a RAID configuration.


(*) dd if=/dev/zero bs=1M count=1024 of=null.tmp
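Note: a plain dd like this can be dominated by the page cache unless the
data is flushed, so a variant with an explicit flush at the end may give
more representative numbers:

  dd if=/dev/zero bs=1M count=1024 of=null.tmp conv=fdatasync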
--
Ullrich Horlacher Informationssysteme und Serverbetrieb
Rechenzentrum IZUS/TIK E-Mail: horlacher-Jqd5/X81recL63KmMnjC+***@public.gmane.org
Universitaet Stuttgart Tel: ++49-711-68565868
Allmandring 30a Fax: ++49-711-682357
70550 Stuttgart (Germany) WWW: http://www.tik.uni-stuttgart.de/
Ulli Horlacher
2013-09-02 16:33:40 UTC
Post by Ulli Horlacher
(...)
`-3600508e0000000003f8903fd35c68a0c (dm-2) 252:2 0 1.8T 0 mpath
The multipath device-mapper is the problem!
I found a workaround: disabling multipathing for the local disk sde

***@vms3:# cat /etc/multipath.conf
blacklist {
wwid 3600508e0000000003f8903fd35c68a0c
}
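For the blacklist to take effect, the existing map also has to go away;
something like this should flush it and make multipathd re-read its config
(otherwise a reboot does the same):

  multipath -f 3600508e0000000003f8903fd35c68a0c
  multipathd -k"reconfigure"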

***@vms3:# zpool create -m /zfs/data data /dev/disk/by-id/scsi-3600508e0000000003f8903fd35c68a0c-part3

***@vms3:# zpool list -v data
NAME  SIZE  ALLOC  FREE  CAP  DEDUP  HEALTH  ALTROOT
data  1.75T  135K  1.75T  0%  1.00x  ONLINE  -
  scsi-3600508e0000000003f8903fd35c68a0c-part3  1.75T  135K  1.75T  -


Thanks for your hints!
--
Ullrich Horlacher Informationssysteme und Serverbetrieb
Rechenzentrum IZUS/TIK E-Mail: horlacher-Jqd5/X81recL63KmMnjC+***@public.gmane.org
Universitaet Stuttgart Tel: ++49-711-68565868
Allmandring 30a Fax: ++49-711-682357
70550 Stuttgart (Germany) WWW: http://www.tik.uni-stuttgart.de/
Jorge Gonzalez
2013-09-02 16:37:12 UTC
Post by Ulli Horlacher
(...)
I found a workaround: disabling multipathing for the local disk sde
(...)
Thanks for your hints!
OK to the workaround, but I'd spend a couple more minutes investigating
why the multipath system claims your local disk at all, if this is not
intended behaviour. You have sidestepped the problem for now, but you may
have a misconfiguration somewhere that could bite you in the future.
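One setting worth a look, assuming your multipath-tools version supports
it, is find_multipaths: with it enabled in the defaults section, devices
that only ever have a single path are normally not claimed at all, so local
disks would not need to be blacklisted one by one. Something like:

  defaults {
          find_multipaths yes
  }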

BR
J.

Ulli Horlacher
2013-09-02 14:48:22 UTC
Post by Ulli Horlacher
cannot open '/dev/sde3': Device or resource busy
cannot create 'data': one or more vdevs refer to the same device, or one of
the devices is part of an active md or lvm device
Why?
/dev/sde is completely new and unused, I do not have any md or lvm devices.
I forgot... I have device-mapper installed:


***@vms3:~# fdisk -l /dev/mapper/3600508e0000000003f8903fd35c68a0c

Disk /dev/mapper/3600508e0000000003f8903fd35c68a0c: 2000.0 GB, 1999999336448 bytes
255 heads, 63 sectors/track, 243152 cylinders, total 3906248704 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0cc24132

Device                                        Boot       Start         End      Blocks  Id  System
/dev/mapper/3600508e0000000003f8903fd35c68a0c1            2048    67110911    33554432  83  Linux
/dev/mapper/3600508e0000000003f8903fd35c68a0c2        67110912   134219775    33554432  83  Linux
/dev/mapper/3600508e0000000003f8903fd35c68a0c3       134219776  3906248703  1886014464  bf  Solaris

But there are no partition device files:

***@vms3:~# ll /dev/mapper/3600508e0000000003f8903fd35c68a0c*
lrwxrwxrwx root root - 2013-09-02 15:37:48 /dev/mapper/3600508e0000000003f8903fd35c68a0c -> ../dm-2

***@vms3:~# ll /dev/dm-2*
brw-rw---- root disk 252,002 2013-09-02 15:37:48 /dev/dm-2

***@vms3:~# zpool create -m /zfs/data data /dev/mapper/3600508e0000000003f8903fd35c68a0c3
cannot resolve path '/dev/mapper/3600508e0000000003f8903fd35c68a0c3'


Is device-mapper incompatible with zfs?
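Device-mapper itself should not be the issue; the missing partition nodes
for a dm device can usually be created with kpartx (which should yield the
...c1, ...c2 and ...c3 nodes):

  kpartx -a /dev/mapper/3600508e0000000003f8903fd35c68a0c

Blacklisting the disk in multipath.conf, as elsewhere in this thread, is
the simpler route, though.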
--
Ullrich Horlacher Informationssysteme und Serverbetrieb
Rechenzentrum IZUS/TIK E-Mail: horlacher-Jqd5/X81recL63KmMnjC+***@public.gmane.org
Universitaet Stuttgart Tel: ++49-711-68565868
Allmandring 30a Fax: ++49-711-682357
70550 Stuttgart (Germany) WWW: http://www.tik.uni-stuttgart.de/