Discussion:
Import the FC volume multiple times

Tamas Papp
2015-02-06 19:18:15 UTC
hi,

Just a theoretical question: is it possible?

Let's say there is a SAN and two machines. One imports the volume
normally and the other one imports it in read-only mode.

Thanks,
tamas

Durval Menezes
2015-02-06 19:22:10 UTC
Hello Tamas,
Post by Tamas Papp
Let's say there is a SAN and two machines. One imports the volume
normally and the other one imports it in read-only mode.
AFAIK, you'd have to tell the read-only importer to force it with -f for
the import to succeed. Also, the read-only side will eventually see
corrupt/incomplete data as things are updated by the read-write side.
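For concreteness, a minimal sketch of the two imports (pool name "tank"
is a placeholder, and I haven't tested this combination):

    # read-write node: normal import
    zpool import tank

    # read-only node: import the same pool read-only, forcing past
    # the "pool appears to be in use by another system" check
    zpool import -o readonly=on -f tank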

ZFS wasn't really made to be used that way...

Cheers,
--
Durval.
Tamas Papp
2015-02-06 19:28:48 UTC
Post by Durval Menezes
AFAIK, you'd have to tell the read-only importer to force it with -f
for the import to succeed. Also, the read-only side will eventually
see corrupt/incomplete data as things are updated by the read-write side.
Yes, this is my opinion too.
Post by Durval Menezes
ZFS wasn't really made to be used that way...
Right, I am just curious.


tamas

Fajar A. Nugraha
2015-02-18 09:25:39 UTC
Post by Tamas Papp
Post by Durval Menezes
AFAIK, you'd have to tell the read-only importer to force it with -f
for the import to succeed. Also, the read-only side will eventually
see corrupt/incomplete data as things are updated by the read-write side.
Yes, this is my opinion too.
You should be able to use device mapper to create snapshots (to be
reintegrated later, or thrown away, depending on the r/w or r/o side)
on top of the shared device:
http://unix.stackexchange.com/questions/67678/gnu-linux-overlay-block-device-stackable-block-device
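Roughly like this with dmsetup (device paths are placeholders, and I
haven't actually tried it):

    # /dev/sdX = shared LUN, /dev/sdY = local copy-on-write store
    SECTORS=$(blockdev --getsz /dev/sdX)
    # writable overlay: reads fall through to the shared LUN,
    # writes land only in the local COW store
    dmsetup create pool-overlay --table \
        "0 $SECTORS snapshot /dev/sdX /dev/sdY P 8"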
@Tamas: is there a particular use case you have in mind?

IMHO for some use cases, instead of sharing the pool vdevs, it's much
simpler to share a zfs dataset (e.g. via nfs) or a zvol (e.g. via
iscsi), and use zfs snapshots so that each client sees its own
private copy of the data.
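A rough sketch of that approach (all names are placeholders):

    # one snapshot, then a writable clone per client
    zfs snapshot tank/data@base
    zfs clone tank/data@base tank/client1
    zfs set sharenfs=on tank/client1   # export the clone over NFS

    # same idea with a zvol exported over iSCSI
    # (iSCSI target configuration not shown)
    zfs snapshot tank/vol@base
    zfs clone tank/vol@base tank/vol-client1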
--
Fajar

Tamas Papp
2015-02-18 10:00:10 UTC
Post by Fajar A. Nugraha
@Tamas: is there a particular use case you have in mind?
IMHO for some use cases, instead of sharing the pool vdevs, it's much
simpler to share a zfs dataset (e.g. via nfs) or a zvol (e.g. via
iscsi), and use zfs snapshots so that each client sees its own
private copy of the data.
There was a thread on a local (Hungarian) forum with this
question... without zfs :)

That got me thinking about it. The person who asked solved the actual
issue via nfs, but was curious whether there is a filesystem or other
solution where everything is shared purely through local filesystems
or devices, with no network connection between the nodes.

But it's only a theoretical question.


Cheers,
tamas

Gordan Bobic
2015-02-18 10:33:07 UTC
You cannot have concurrent FS level access with ZFS (unless it is happening
over NFS or similar, with the usual caveats about staleness and locking).

From what you are describing, it sounds like GFS2 might be what you are
looking for. It is a cluster file system designed for concurrent access to
the same FS by multiple nodes with appropriate locking to ensure the files
don't get corrupted by concurrent access. There are things it won't work
for (e.g. most databases), but for regular file-accessing applications it
works fine. Make sure you understand the performance implications of this
kind of solution before you deploy it, though. If locks don't need to
bounce between nodes, it can be very performant. If the locks are bouncing
between the machines on every file access, things will slow down to a
crawl very quickly.

OCFS2 is another option similar to GFS2.
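As a rough illustration of the GFS2 setup (cluster and device names are
placeholders; this assumes a working corosync/dlm stack underneath):

    # DLM locking, lock table <clustername>:<fsname>, one journal per node
    mkfs.gfs2 -p lock_dlm -t mycluster:gfs0 -j 2 /dev/sdX

    # then, on every node:
    mount -t gfs2 /dev/sdX /mnt/shared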
Post by Tamas Papp
The person who asked solved the actual issue via nfs, but was curious
whether there is a filesystem or other solution where everything is
shared purely through local filesystems or devices, with no network
connection between the nodes.
Tamas Papp
2015-02-18 11:28:32 UTC
Post by Gordan Bobic
From what you are describing, it sounds like GFS2 might be what you
are looking for.
[...]
OCFS2 is another option similar to GFS2.
AFAIK both OCFS* and GFS* need a network connection.

tamas

Gordan Bobic
2015-02-18 11:31:55 UTC
A logical network connection, yes. But if everything only needs to
exist on a single local host with multiple VMs, you can bridge a dummy
interface and use that for internal comms between the VMs, with no
external network access.
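For example, with iproute2 (interface names are placeholders):

    ip link add dummy0 type dummy
    ip link add br-internal type bridge
    ip link set dummy0 master br-internal
    ip link set dummy0 up
    ip link set br-internal up
    # attach each VM's tap interface to br-internal; since no physical
    # NIC is enslaved, traffic never leaves the host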
Post by Tamas Papp
AFAIK both OCFS* and GFS* need a network connection.
Tamas Papp
2015-02-18 11:48:47 UTC
Post by Gordan Bobic
A logical network connection, yes. But if everything only needs to
exist on a single local host with multiple VMs, you can bridge a dummy
interface and use that for internal comms between the VMs, with no
external network access.
I don't know the exact situation, but these are physical machines.
As far as I understand there is a central storage array (DAS) and all
machines are connected to it.

One machine writes and the rest only read the filesystem. The question
is why a network is needed here at all.


IMHO there are so few cases where such a system would really win over
a network-based solution that it would not be worth the effort of
developing such an FS (or device?) and making it work properly.


tamas

Durval Menezes
2015-02-18 12:26:06 UTC
Hi Gregor,
Post by Gregor
You should be able to use device mapper to create snapshots (to be
reintegrated later, or thrown away, depending on the r/w or r/o side)
on top of the shared device:
http://unix.stackexchange.com/questions/67678/gnu-linux-overlay-block-device-stackable-block-device

Interesting, I did not know DM could do that.

Here's a more authoritative reference, straight from the horse's mouth:
https://www.kernel.org/doc/Documentation/device-mapper/snapshot.txt

In fact, according to it, the LVM2 snapshot functionality is built on top
of DM's.
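(In LVM2 terms, which should exercise the same DM snapshot target, that
would be something like the following; names are made up:)

    # snapshot of /dev/vg0/lv0 with a 1G copy-on-write area
    lvcreate --snapshot --size 1G --name lv0-snap /dev/vg0/lv0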
Post by Gregor
Full disclosure: never tried it this way (so no idea about
performance and stability), but in theory it could work...
Well, if it's similar to the LVM2 snapshots it serves as a basis for,
I can attest that performance-wise it used to suck big time (as of
kernel 2.6.2x)... I never used it in production due to that. Can't say
much about stability either...

Cheers,
--
Durval.