l***@gmail.com
2015-02-23 17:24:56 UTC
It looks like zfs send is a highly serial operation.
I have been doing some tests with ZFS send/recv. Initially I thought the
bottleneck was the 1 GbE network interfaces, but after several tests with
SSH, SCP, mbuffer and netcat I have ruled out the network and the
transport protocol as the bottleneck.
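For reference, a typical mbuffer pipeline of the kind I tested looks
roughly like this (pool, snapshot, host and port names are placeholders):

    # receiving host: listen on a TCP port and feed the stream into zfs recv
    mbuffer -s 128k -m 1G -I 9090 | zfs recv -F tank/backup

    # sending host: stream the snapshot straight to the receiver
    zfs send tank/data@snap1 | mbuffer -s 128k -m 1G -O receiver:9090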
After carefully analysing iostat results, I have noticed that zfs send
builds the stream with no more than one outstanding I/O operation at a
time. I believe this is the bottleneck.
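For anyone who wants to reproduce the observation, these are the kinds of
views I was watching (the pool name is a placeholder):

    # extended per-device stats every second; the queue-depth column
    # (avgqu-sz on older sysstat, aqu-sz on newer) is the one to watch
    iostat -x 1

    # per-vdev throughput from ZFS itself
    zpool iostat -v tank 1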
I am also using enterprise SSDs and those disks are not the bottleneck
either.
- Has anyone seen this kind of performance?
- Any thoughts on doing a parallel tree walk so that zfs send could build
the stream as a parallel operation?
I have been trying to figure out how to parallelize this cleanly, but
additional insights would be appreciated.
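One user-level workaround I have been considering (it does not fix the
serial tree walk itself) is to run several independent sends concurrently,
one per child dataset. A rough sketch, assuming GNU xargs, a common
snapshot name across all datasets, and that the parent datasets already
exist on the receiver:

    # up to 4 concurrent sends, one per dataset under tank/data;
    # 'zfs recv -d' reuses the source path (minus the pool name) under backup
    zfs list -H -o name -r tank/data | \
      xargs -P 4 -I {} sh -c 'zfs send {}@snap1 | ssh receiver zfs recv -d -F backup'

This only helps when the data is spread across multiple datasets; a single
large dataset still gets one serial stream.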