Jorge Gea
2014-05-23 13:00:20 UTC
Hi,
I'm working with the /proc/spl/kstat/zfs/arcstats of ZoL, and there are a
lot of things that I don't understand very well.
So, my first question is: is there any documentation or manual where each of
these ARC values is explained in detail? I have looked at the Oracle docs and
searched online, but I haven't found anything accurate.
Now, some specific questions about it:
1. Looking only at these arcstats, is there a way to tell whether the dedup
tables fit in RAM? I know how to calculate DDT sizes using zdb and zpool
status -D, but I would like an accurate way to tell whether the DDT is
actually in RAM or not. I also don't know what the ARC's priorities are (MRU,
MFU, DDTs, other metadata...).
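For the DDT size part, a rough in-core estimate can be pulled from the dedup
summary line that `zpool status -D` prints (entry count times per-entry
in-core size). A minimal sketch, using a made-up sample line in place of real
pool output (the numbers are illustrative, not from any real pool):

```shell
# Hedged sketch: estimate the in-RAM DDT footprint from `zpool status -D`.
# The sample line below stands in for real output; on a live system you
# would pipe `zpool status -D <pool>` instead.
sample='dedup: DDT entries 1000000, size 512B on disk, 320B in core'

echo "$sample" | awk '/DDT entries/ {
    gsub(",", "", $4);              # entry count (strip trailing comma)
    entries = $4;
    core = $9; sub(/B$/, "", core); # per-entry in-core size, in bytes
    printf "approx DDT RAM: %.1f MiB\n", entries * core / 1048576;
}'
```

Note this only tells you how much RAM the full DDT *would* need, not how much
of it the ARC currently holds.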
2. Is there any way to know how much memory is being used by snapshots and
clones at a specific moment, also using these arcstats?
3. I have several systems with, theoretically, enough RAM to handle their
deduplicated data: in general, 128 GB of RAM for less than 9 TB of unique
data. I have also set zfs_arc_max to 120 GB and zfs_arc_meta_limit to 60 GB.
With this configuration the systems should be happy, but on some of them
arc_meta_used reaches arc_meta_limit all the time, and the filesystem
sometimes suffers performance issues. How can this be explained?
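For reference, this is how I watch the metadata pressure. A minimal sketch;
the here-doc stands in for /proc/spl/kstat/zfs/arcstats (the two values are
illustrative), and on a live system you would cat the real file instead:

```shell
# Hedged sketch: compare arc_meta_used against arc_meta_limit.
# arcstats lines have the form: <name> <type> <value>.
arcstats() {
cat <<'EOF'
arc_meta_used                   4    64000000000
arc_meta_limit                  4    64424509440
EOF
}

arcstats | awk '
    $1 == "arc_meta_used"  { used = $3 }
    $1 == "arc_meta_limit" { limit = $3 }
    END { printf "meta used: %.1f%% of limit\n", 100 * used / limit }'
```

When that percentage sits near 100 all the time, the ARC is evicting metadata
constantly, which matches the performance issues I see.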
Thanks for any help
To unsubscribe from this group and stop receiving emails from it, send an email to zfs-discuss+unsubscribe-VKpPRiiRko7s4Z89Ie/***@public.gmane.org