Discussion:
Debian: how to move zfs-mount earlier in the boot process?
Michael Kjörling
2013-12-05 09:08:35 UTC
(I tried sending this yesterday, but for some reason it doesn't seem
to have made it. Sorry if it ends up being a duplicate.)



OK, this is one of those questions whose answer is likely to hit me
as "why didn't I think of that, stupid?". But I can't seem to figure
it out.

I am running Debian Wheezy with ZFS On Linux installed through the
'debian-zfs' 7~wheezy package and friends. And aside from the very
occasional hiccup that may or may not even be ZFS-related, it's
working splendidly.

However, despite (or perhaps because of) the fact that zfs-mount is
executed as S01 in runlevel 2, the ZFS file systems are mounted _very_
late during the boot process. It's early enough to not cause much
grief for starting software, but it's late enough that there's already
quite a few processes running and a fair bit of boot work has been
done by the time those file systems get mounted.

I want to move mounting ZFS file systems considerably earlier. To that
end, I tried editing /etc/init.d/zfs-mount to indicate:

# Default-Start: S 1 2 3 4 5
# Default-Stop: 0 6

rather than the distributed:

# Default-Start: 2 3 4 5
# Default-Stop: 0 1 6

I also tried changing from:

# Required-Start: $local_fs
# Required-Stop: $local_fs
# Default-Start: 2 3 4 5
# Default-Stop: 0 1 6

to:

# Required-Start: mountall
# Required-Stop:
# Default-Start: S
# Default-Stop: 0 6

When I then tried to execute 'update-rc.d -f zfs-mount' with either of
these changes made, it spewed out a whole series of warnings about
mismatched runlevels and didn't seem to do anything. Manually deleting
the /etc/rc?.d/???zfs-mount symlinks first appeared to have the same
effect. When I restored the two lines in the init script to what they
were like in the distributed package, the warnings went away.
Specifically, the warnings were very much along the lines of (these
copied from my most recent attempt):

update-rc.d: using dependency based boot sequencing
update-rc.d: warning: default start runlevel arguments (2 3 4 5) do not match zfs-mount Default-Start values (S)
update-rc.d: warning: default stop runlevel arguments (0 1 6) do not match zfs-mount Default-Stop values (0 6)
insserv: warning: current start runlevel(s) (2 3 4 5) of script `zfs-mount' overrides LSB defaults (S).
insserv: warning: current stop runlevel(s) (0 1 6) of script `zfs-mount' overrides LSB defaults (0 6).
insserv: warning: current start runlevel(s) (2 3 4 5) of script `zfs-mount' overrides LSB defaults (S).
insserv: warning: current stop runlevel(s) (0 1 6) of script `zfs-mount' overrides LSB defaults (0 6).
insserv: warning: current start runlevel(s) (2 3 4 5) of script `zfs-mount' overrides LSB defaults (S).
insserv: warning: current stop runlevel(s) (0 1 6) of script `zfs-mount' overrides LSB defaults (0 6).

What changes do I need to make in order to make ZFS file systems mount
as early as possible, _preferably together with rcS/mountall?_
--
Michael Kjörling • http://michael.kjorling.se • michael-/***@public.gmane.org
“People who think they know everything really annoy
those of us who know we don’t.” (Bjarne Stroustrup)

Turbo Fredriksson
2013-12-05 10:09:16 UTC
Post by Michael Kjörling
When I then tried to execute 'update-rc.d -f zfs-mount' with either of
these changes made, it spewed out a whole series of warnings about
mismatched runlevels and didn't seem to do anything.
Newer Debian GNU/Linux systems use insserv nowadays. I haven't yet
had the time to learn the new init system. But what I've learned
so far about insserv is that it's just complicated!

There's probably a good reason somewhere, but I just haven't managed
to figure out what...
Post by Michael Kjörling
Manually deleting the /etc/rc?.d/???zfs-mount symlinks first appeared
to have the same effect. When I restored the two lines in the init
script to what they were like in the distributed package, the warnings
went away.
Looking at the manpage of insserv, it seems like it keeps its own
'database' of scripts etc...

/etc/insserv.conf
/etc/init.d/.depend.boot
/etc/init.d/.depend.start
/etc/init.d/.depend.stop
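
You can probably see what it computed for zfs-mount with something
like this (just a quick check; those .depend.* files are plain text):

grep zfs /etc/init.d/.depend.*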
Post by Michael Kjörling
update-rc.d: warning: default start runlevel arguments (2 3 4 5) do not match zfs-mount Default-Start values (S)
update-rc.d: warning: default stop runlevel arguments (0 1 6) do not match zfs-mount Default-Stop values (0 6)
insserv: warning: current start runlevel(s) (2 3 4 5) of script `zfs-mount' overrides LSB defaults (S).
insserv: warning: current stop runlevel(s) (0 1 6) of script `zfs-mount' overrides LSB defaults (0 6).
insserv: warning: current start runlevel(s) (2 3 4 5) of script `zfs-mount' overrides LSB defaults (S).
insserv: warning: current stop runlevel(s) (0 1 6) of script `zfs-mount' overrides LSB defaults (0 6).
insserv: warning: current start runlevel(s) (2 3 4 5) of script `zfs-mount' overrides LSB defaults (S).
insserv: warning: current stop runlevel(s) (0 1 6) of script `zfs-mount' overrides LSB defaults (0 6).
Try forcing a remove first (with a pristine script)

update-rc.d -f zfs-mount remove

or perhaps

update-rc.d -f zfs-mount disable

and then try to run your update command again (after the changes).
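
So the whole dance would probably be something like this (untested;
with dependency based boot sequencing, 'defaults' should just pick up
whatever the edited LSB header says):

update-rc.d -f zfs-mount remove
# now edit the LSB header in /etc/init.d/zfs-mount
update-rc.d zfs-mount defaults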
--
Imagine you're an idiot and then imagine you're in
the government. Oh, sorry. Now I'm repeating myself
- Mark Twain

Turbo Fredriksson
2013-12-05 10:16:14 UTC
You might want to have it in /etc/rcS.d/ instead.

Considering that ZFS is both a 'raid system' (think MD) and a
volume manager (think LVM), and that 'mdadm-raid' (which sets up
the MD devices) starts at S07 in rcS.d (as does 'lvm2'), I'd put
zfs-mount there too as a test.

I'm a little unsure how to set this up, but just looking at the
manpage of update-rc.d, I _THINK_ (!!) that something like this
would work (after first making sure that insserv knows you've
removed the script from the rc[0-6].d runlevels; try the remove
cmd above):

update-rc.d zfs-mount start 07 S . stop 08 6 . stop 08 0 .

This _SHOULD_ (if I understand the manpage correctly) make sure
that zfs-mount:

1. STARTs at position 7 in runlevel S (which is 'startup', more or less :)
2. STOPs at position 8 in runlevel 6 (which is 'reboot')
3. STOPs at position 8 in runlevel 0 (which is 'halt').

You might want to add '-n' to this line so it doesn't actually do
anything, but just shows what would be done. If that looks good,
then remove the '-n' and run it for real...
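
In other words, first something like (untested):

update-rc.d -n zfs-mount start 07 S . stop 08 6 . stop 08 0 .

and then the same line again without the '-n' once the output looks sane.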

Sander Klein
2013-12-05 10:48:56 UTC
Hi,
Post by Turbo Fredriksson
You might want to have it in /etc/rcS.d/ instead.
Considering that ZFS is both a 'raid system' (think MD) and a
volume manager (think LVM), and that 'mdadm-raid' (which sets up
the MD devices) starts at S07 in rcS.d (as does 'lvm2'), I'd put
zfs-mount there too as a test.
I'm a little unsure how to set this up, but just looking at the
manpage of update-rc.d, I _THINK_ (!!) that something like this
would work (after first making sure that insserv knows you've
removed the script from the rc[0-6].d runlevels; try the remove
cmd above):
update-rc.d zfs-mount start 07 S . stop 08 6 . stop 08 0 .
This _SHOULD_ (if I understand the manpage correctly) make sure
that zfs-mount:
1. STARTs at position 7 in runlevel S (which is 'startup', more or less :)
2. STOPs at position 8 in runlevel 6 (which is 'reboot')
3. STOPs at position 8 in runlevel 0 (which is 'halt').
You might want to add '-n' to this line so it doesn't actually do
anything, but just shows what would be done. If that looks good,
then remove the '-n' and run it for real...
I think there are multiple solutions. First you could modify the LSB
header in the /etc/init.d/zfs-mount initscript. In my script it says:

### BEGIN INIT INFO
# Provides:          zvol zfs zfs-mount
# Required-Start:    $local_fs
# Required-Stop:     $local_fs
# Default-Start:     2 3 4 5
# Default-Stop:      0 1 6
# Short-Description: Mount ZFS filesystems
# Description:       Run the `zfs mount -a` or `zfs umount -a` command.
#                    This init script is deprecated and should be disabled in the
#                    /etc/default/zfs options file. Instead, use the zfs-mount
#                    package for Debian or the zfs-mountall package for Ubuntu.
### END INIT INFO

You could change the '# Default-Start: 2 3 4 5' to '# Default-Start: S'.
After that, run insserv and the boot process will be updated to run this
script in the rcS phase, after the local fs is mounted.
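
That is, the only edit would be this one header line (leaving the rest
of the header alone):

# Default-Start: S

followed by a plain:

insserv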

Another possibility is adding an '# X-Start-Before:' header followed by
the service which must be started after zfs is done.
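
For example, with a made-up service name (substitute whatever on your
system actually needs the ZFS file systems):

# X-Start-Before: nfs-kernel-server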

A third possibility is modifying /etc/insserv.conf. It has the line
'$local_fs +mountall +mountall-bootclean +mountoverflowtmp +umountfs'
which could be modified to '$local_fs +mountall +mountall-bootclean
+mountoverflowtmp +umountfs +zvol +zfs +zfs-mount'.
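
So the complete line in /etc/insserv.conf would then read (all on one
line):

$local_fs +mountall +mountall-bootclean +mountoverflowtmp +umountfs +zvol +zfs +zfs-mount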

Just remember you have to run 'insserv' after every modification to
update the boot process.

Greets,

Sander

Michael Kjörling
2013-12-06 18:28:16 UTC
Post by Sander Klein
You could change the '# Default-Start: 2 3 4 5' to '# Default-Start: S'.
After that, run insserv and the boot process will be updated to run this
script in the rcS phase, after the local fs is mounted.
A third possibility is modifying /etc/insserv.conf. It has the line
'$local_fs +mountall +mountall-bootclean +mountoverflowtmp
+umountfs' which could be modified to '$local_fs +mountall
+mountall-bootclean +mountoverflowtmp +umountfs +zvol +zfs
+zfs-mount'.
I took a combined approach. After making the changes quoted above and
running 'insserv zfs-mount' from a clean slate, it looks like it's
working the way I wanted it to. The only odd thing left is that at
that point there seems to be some crud in /tmp, which prevents me from
putting that directory on the ZFS pool because setting a ZFS file
system mount point to /tmp causes the overall ZFS mount process to
report failure. That's not anywhere near as critical, however; I can
live with temp files living elsewhere.
--
Michael Kjörling • http://michael.kjorling.se • michael-/***@public.gmane.org
“People who think they know everything really annoy
those of us who know we don’t.” (Bjarne Stroustrup)

Tren Blackburn
2013-12-06 18:41:14 UTC
Post by Michael Kjörling
Post by Sander Klein
You could change the '# Default-Start: 2 3 4 5' to '# Default-Start: S'.
After that, run insserv and the boot process will be updated to run this
script in the rcS phase, after the local fs is mounted.
A third possibility is modifying /etc/insserv.conf. It has the line
'$local_fs +mountall +mountall-bootclean +mountoverflowtmp
+umountfs' which could be modified to '$local_fs +mountall
+mountall-bootclean +mountoverflowtmp +umountfs +zvol +zfs
+zfs-mount'.
I took a combined approach. After making the changes quoted above and
running 'insserv zfs-mount' from a clean slate, it looks like it's
working the way I wanted it to. The only odd thing left is that at
that point there seems to be some crud in /tmp, which prevents me from
putting that directory on the ZFS pool because setting a ZFS file
system mount point to /tmp causes the overall ZFS mount process to
report failure. That's not anywhere near as critical, however; I can
live with temp files living elsewhere.
Change the init script to add -O to your 'zfs mount -a' command. This
will allow it to mount on top of a directory that has files in it.

For example, this is from the Debian zfs init script:

log_begin_msg "Mounting ZFS filesystems"
"$ZFS" mount -a -O
log_end_msg $?

Regards,

Tren

Sander Klein
2013-12-06 19:14:17 UTC
Post by Michael Kjörling
Post by Sander Klein
You could change the '# Default-Start: 2 3 4 5' to '# Default-Start: S'.
After that, run insserv and the boot process will be updated to run this
script in the rcS phase, after the local fs is mounted.
A third possibility is modifying /etc/insserv.conf. It has the line
'$local_fs +mountall +mountall-bootclean +mountoverflowtmp
+umountfs' which could be modified to '$local_fs +mountall
+mountall-bootclean +mountoverflowtmp +umountfs +zvol +zfs
+zfs-mount'.
I took a combined approach. After making the changes quoted above and
running 'insserv zfs-mount' from a clean slate, it looks like it's
working the way I wanted it to. The only odd thing left is that at
that point there seems to be some crud in /tmp, which prevents me from
putting that directory on the ZFS pool because setting a ZFS file
system mount point to /tmp causes the overall ZFS mount process to
report failure. That's not anywhere near as critical, however; I can
live with temp files living elsewhere.
What I do in that case is set the mountpoint to legacy and use
/etc/fstab to mount the /tmp partition.

So, a 'zfs set mountpoint=legacy <pool>/<dataset>' and putting something
like 'pool/sys_tmp /tmp zfs defaults 0 0' in /etc/fstab would do the
trick.
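
Or, spelled out as commands (reusing the hypothetical pool/sys_tmp
dataset name from above):

zfs set mountpoint=legacy pool/sys_tmp
echo 'pool/sys_tmp /tmp zfs defaults 0 0' >> /etc/fstab
mount /tmp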

With this you can also undo the insserv trick because the pool will now
be initialized during the local_fs mount phase.

Greets,

Sander
