Discussion:
Mounting(multiply)? Options(stored)? Options(barriers)?
Robert White
2014-10-23 22:27:06 UTC
I've got several questions about mount features that I've been unable to
find definitive answers for.

ITEM: So there are some mount options that I'd like to be able to pin
onto a particular piece of media, like compress=lzo on a thumb drive I
expect to get crowded. Is there a feature equivalent to tune2fs's -o
option, either present or planned?

ITEM: Is there a means (or a plan for one) to use a subvol to
prevent or change active features from propagating to a subdirectory?
An example would be turning off autodefrag or compression for a
subvolume full of virtual machine images, while having it active for
the bulk of the filesystem.

ITEM: If I make one filesystem with subvols /__System, /home, and /VMs,
and mount those as /, /home, and /usr/local/VMs respectively, with
differing feature options for each, will those options be separately
honored, or will the last-mounted or first-mounted subvolume's options
take dominant effect? Compression, auto-defragment, and commit interval
are of primary concern.

ITEM: Is there a no-compress attribute (or something similar) for
negating compress= mount options on specific files or directories? How
about a no-autodefrag?

(I'm about to set up a system and some standards that may last many
years and I'm trying to find the reasonable bounds of where I need to do
hard partitioning.)

--Rob.
Duncan
2014-10-24 02:25:34 UTC
Post by Robert White
I've got several questions about mount features that I've been unable to
find definitive answers for.
ITEM: So there are some mount options that I'd like to be able to pin
onto a particular piece of media, like compress=lzo on a thumb drive I
expect to get crowded. Is there a feature equivalent to tune2fs's -o
option, either present or planned?
Planned and already partially implemented, yes. The infrastructure and a
few options for it have already been implemented as btrfs properties --
via extended attributes. See the btrfs-property (8) manpage in a recent
btrfs-progs and play with the command a bit to see what's already
available. But AFAIK some planned properties remain to be implemented,
and some that are already there are only partially implemented.

It's possible to set the compression attribute on an individual file, for
instance, but last I knew, it wasn't yet possible to specify the type of
compression, so the default (either the mount default or, if that's not
set either, the btrfs default, gzip) would be used.
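
For a concrete feel of it, here's roughly what the property commands look
like with a recent btrfs-progs (the path is made up, and exactly which
values are honored depends on your progs/kernel version, per the above):

  # list the properties available on an object
  btrfs property list /mnt/usb/some/file

  # request lzo compression for that one file
  btrfs property set /mnt/usb/some/file compression lzo

  # read it back
  btrfs property get /mnt/usb/some/file compression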

And there has been some discussion of the semantics of specifically set
no-compress, vs. the attribute/property not existing for the file at all,
how inherited compression should work, etc. I believe that's already
hashed out, but last I knew, it wasn't yet coded up. (But I've not
upgraded since the progs 3.17 announcement and don't know if it changed
anything in that regard, yet.)

Also, since these btrfs properties are normally recorded as extended
attributes, the various non-btrfs utilities (chattr from e2fsprogs, etc.)
that deal with extended attributes will need to be updated to handle the
richer semantics as well.
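
You can already see that plumbing with the stock xattr tools, for what
it's worth (assuming the attr package's getfattr is installed; the path
is made up):

  # btrfs properties live in the btrfs.* extended attribute namespace
  getfattr --absolute-names -d -m btrfs. /mnt/usb/some/file
  # expect something like:  btrfs.compression="lzo"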
Post by Robert White
ITEM: Is there a means (or a plan for one) to use a subvol to
prevent or change active features from propagating to a subdirectory?
An example would be turning off autodefrag or compression for a
subvolume full of virtual machine images, while having it active for
the bulk of the filesystem.
The idea is to eventually have that set by mount. If the subvolume is
separately mounted, some mount options will be (and some already are)
separately set for it. If you're accessing it like a normal subdir on
the parent subvolume, however, without separately mounting it, then the
mount options for the parent will apply.

Which gets interesting since it's then possible to mount a subvolume
separately in one location, with its own behavior, and access it from the
parent subvolume mounted elsewhere, with entirely different options, both
at the same time. Tho of course the Linux VFS has been dealing with this
to a limited extent for bind-mounts, where one view may be readonly and
another writable, for instance, for some time now.

As to what mount options can already be set separately, see the
discussion on the btrfs wiki:

https://btrfs.wiki.kernel.org/index.php/FAQ#Can_I_mount_subvolumes_with_different_mount_options.3F
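
To make that concrete, the sort of thing being aimed at looks like this
in /etc/fstab form (the UUID is a placeholder, the subvolume names are
the ones from the question quoted below, and whether a given option can
actually differ per subvolume is exactly what that FAQ entry covers):

  # one btrfs filesystem, three subvolumes, nominally different options
  UUID=<fs-uuid>  /               btrfs  subvol=__System,compress=lzo,noatime  0 0
  UUID=<fs-uuid>  /home           btrfs  subvol=home,compress=lzo,noatime      0 0
  UUID=<fs-uuid>  /usr/local/VMs  btrfs  subvol=VMs,noatime                    0 0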
Post by Robert White
ITEM: If I make one filesystem with subvols /__System, /home, and /VMs,
and mount those as /, /home, and /usr/local/VMs respectively, with
differing feature options for each, will those options be separately
honored, or will the last-mounted or first-mounted subvolume's options
take dominant effect? Compression, auto-defragment, and commit interval
are of primary concern.
See the discussion above. As for whether conflicting options error out,
get ignored, or update the whole filesystem, there has been some
discussion on the list but IDR the conclusion as it doesn't pertain to me
since I don't use subvolumes like that, preferring fully independent
filesystems on their own partitions, instead. (If the filesystem
metadata gets corrupted, it can easily mean the loss of all data on it.
Subvolumes provide little if any protection in that regard. I *STRONGLY*
prefer not to put all my data eggs in one filesystem basket, in case its
bottom falls out.) I believe in some cases either the conflicting mount
will error out or it'll mount but ignore the conflicts (IOW, it shouldn't
arbitrarily rewrite the option for the entire filesystem, that's what
remount is for!), but don't know if it actually works that way for
everything yet. Either watch for a response from someone with practical
knowledge of the situation, or do your own testing, before you depend on
it.
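
If you do want to test it yourself, a quick throwaway experiment along
these lines (device name and mount points are examples only) will show
what the kernel you're actually running does:

  mkdir -p /mnt/a /mnt/b
  mount -o subvol=__System,compress=lzo /dev/sdX1 /mnt/a
  mount -o subvol=VMs                   /dev/sdX1 /mnt/b

  # see what options each mount point actually ended up with; don't be
  # surprised if both show the same set, since most btrfs options are
  # still per-filesystem rather than per-subvolume
  grep sdX1 /proc/mounts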
Post by Robert White
ITEM: Is there a no-compress attribute (or something similar) for
negating compress= mount options on specific files or directories? How
about a no-autodefrag?
Yes for no-compress, at least after they get everything set up as
discussed. I don't believe so for autodefrag, as that's controlled by
mount option only, AFAIK. The plan is to let different subvolumes mount
with different autodefrag options eventually, but I don't believe that's
implemented yet.
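
For the VM-image subvolume case specifically, the per-file flags that
exist today go through chattr (a sketch; the 'm' no-compression flag
needs a reasonably new kernel and e2fsprogs, so treat it as
version-dependent):

  # ask that new files in this directory not be compressed
  chattr +m /usr/local/VMs

  # the usual companion for VM images: no copy-on-write for new files
  # (only affects files created after the flag is set)
  chattr +C /usr/local/VMs

  # verify
  lsattr -d /usr/local/VMs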
Post by Robert White
(I'm about to set up a system and some standards that may last many
years and I'm trying to find the reasonable bounds of where I need to do
hard partitioning.)
I guess my view on that should be obvious from the above comment. I
strongly prefer hard partitioning as I don't want to risk all my data
eggs being in the same filesystem basket when its bottom falls out! =:^)
--
Duncan - List replies preferred. No HTML msgs.
"Every nonfree program has a lord, a master --
and if you use the program, he is your master." Richard Stallman

Robert White
2014-10-24 04:07:33 UTC
Post by Duncan
See the discussion above. As for whether conflicting options error out,
get ignored, or update the whole filesystem, there has been some
discussion on the list but IDR the conclusion as it doesn't pertain to me
since I don't use subvolumes like that, preferring fully independent
filesystems on their own partitions, instead. (If the filesystem
metadata gets corrupted, it can easily mean the loss of all data on it.
Subvolumes provide little if any protection in that regard. I *STRONGLY*
prefer not to put all my data eggs in one filesystem basket, in case its
bottom falls out.) I believe in some cases either the conflicting mount
will error out or it'll mount but ignore the conflicts (IOW, it shouldn't
arbitrarily rewrite the option for the entire filesystem, that's what
remount is for!), but don't know if it actually works that way for
everything yet. Either watch for a response from someone with practical
knowledge of the situation, or do your own testing, before you depend on
it.
Ouch, I abandoned multiple hard partitions on any one spindle a long,
long time ago. The failure modes likely to occur just don't justify the
restrictions and hassle. Let alone the competitive file-system
scheduling that can eat your system performance with a box of wine.

I've been in this mess since Unix System 3 release 4. The mythology of
the partitioned disk is deep and horrible. Ever since they stopped
implementing pipes as anonymous files on the root partition, most of
the reasoning has ended up backward.

Soft failures are likely to spray damage across all the filesystems of
a given type, and a disk failure isn't going to obey partition
boundaries.

Better the efficiency of whole-disk filesystems and a decent backup
plan.

Just my opinion.
Duncan
2014-10-24 06:46:27 UTC
Post by Robert White
Ouch, I abandoned multiple hard partitions on any one spindle a long,
long time ago. The failure modes likely to occur just don't justify the
restrictions and hassle. Let alone the competitive file-system
scheduling that can eat your system performance with a box of wine.
I've been in this mess since Unix System 3 release 4. The mythology of
the partitioned disk is deep and horrible. Ever since they stopped
implementing pipes as anonymous files on the root partition, most of
the reasoning has ended up backward.
Soft failures are likely to spray damage across all the filesystems of
a given type, and a disk failure isn't going to obey partition
boundaries.
Better the efficiency of whole-disk filesystems and a decent backup
plan.
Just my opinion.
Of course JMHO as well, but that opinion is formed from a couple decades
of hard experience, now...

The "single egg basket" hardware device failure scenario is why I use
multiple identically partitioned devices setup with multiple independent
raid1s (originally mdraid1, now btrfs raid1) across those partitions for
most stuff, these days. Raid itself isn't backup, but software raid
along with hardware JBOD and standard-driver solutions such as AHCI,
means both hardware storage devices and the chipsets driving them can be
replaced as necessary, and I've actually done so.

So I do put multiple partitions, including the working copy and primary
backup of the same data, on the same hardware device (which may or may
not be a spindle these days; my primaries are SSDs, with the second-level
backups and media drives being spinning rust). But the partitioning
layout and data on that device are raid-mirrored to a second, identically
partitioned device (using redundant and checksummed GPT, BTW), with
separate mdraid or now btrfs raid on each partition, so if one device
fails, the mirror provides the first level of hardware backup.
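
Concretely, per partition pair that amounts to something like this
(device and partition names are examples only):

  # the old way: mdraid1 per matching partition pair, filesystem on top
  mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda5 /dev/sdb5
  mkfs.btrfs /dev/md0

  # the current way: btrfs raid1 directly across the partition pair
  mkfs.btrfs -m raid1 -d raid1 /dev/sda5 /dev/sdb5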

And the partitions are typically small enough (my root partition,
including pretty much everything installed by the package manager along
with its tracking database, is only 8 GB, and all partitions other than
the media partitions are under 50 GB) that I can keep multiple copies on
that same set of software-raided hardware. So I have an 8-gig root and
another 8-gig rootbak, a 20-gig home and another 20-gig homebak, with
the second copies located rather further into the hardware devices,
after the first copies of all my other partitions.

Grub is installed to each hardware device in the set, and tested to load
from just the single hardware device. Similarly, the /boot partition
that grub points to on each device is independent, since grub can easily
point at only one per device. Of course I can select the hardware device
to boot, and thus the copy of grub, from the BIOS. (And when I update
grub or the /boot partition, I do it on one device at a time, testing
that the update still works before updating the other.)

And of course once I'm in grub I can adjust the kernel commandline root=
and similar parameters as necessary, and indeed, I even have grub menu
options set up to do that so I don't have to do it at the grub cli.
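
Roughly what that looks like in practice; the menuentry is a hand-written
sketch for grub2's custom config, and the label, kernel path, and device
names are made-up examples:

  # grub goes onto each device in the set, independently bootable
  grub-install /dev/sda
  grub-install /dev/sdb

  # /etc/grub.d/40_custom: an entry that boots the backup root instead
  menuentry "Linux (rootbak)" {
      search --no-floppy --label --set=root rootbak
      linux  /boot/vmlinuz root=LABEL=rootbak ro
  }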

But of course as you said, a kernel soft failure could scribble across
all partitions, mounted and unmounted alike, thus taking out that first
level backup along with the working copy. I've never had it happen but I
do run live-git pre-release kernels so I recognize the possibility.

Which is why the second level backup is to a totally different set of
devices. While my primary set of devices (and thus the working copy and
first backup) are SSD, using btrfs, my second set is spinning rust, using
reiserfs. That covers hardware device technology failure as well as
filesystem type failure. =:^)

And of course I can either bios-select the grub on the spinning rust or
boot to the normal grub and grub-select the spinning rust boot, my second-
level backup, as easily as I can the first-level backup.

Meanwhile, other than simple hardware failure taking out a full device,
my most frequent issues have all taken out individual partitions. That
includes one which was a heat-related head-crash due to A/C failure here
in Phoenix, in the middle of the summer. The room was at least 50C and
the drive was way hotter than that. But while I'm sure the platters were
physically grooved due to the heat-related head-crash, after I shut down
and everything cooled back down, the partitions that weren't mounted were
nearly undamaged (an individual file damaged here or there, I suppose due
to random seeks across the unmounted partitions, between operational
partitions, before the CPU froze).

I actually booted and ran from the backup-root, backup-home, etc.,
partitions on that damaged drive for a couple of months, before I got the
money together to replace it with an upgrade. Just because the partitions
that were mounted at the time were probably physically grooved and pretty
well damaged beyond any possibility of recovery didn't mean the partitions
that were unmounted at the time were significantly damaged, and they
weren't, as demonstrated by the fact that I actually ran from them on the
damaged hardware for that long.

FWIW, there are normally unattached third-level backups of some data as
well, tho I don't update those as regularly, because I figure that if the
disaster is big enough that I'm resorting to them, it's likely a robbery
or fire or natural disaster, and I'll likely have bigger problems to worry
about, like simply surviving and finding another place to live, and won't
be too worried about the relatively minor issue of what happened to my
computer. After all, the *REAL* important backup is in my head, and of
course, if /that/ gets significantly damaged or destroyed, I think it's
safe to say I'm not going to be worrying about it or anything else for
awhile. =8^0 Gotta keep some real perspective on things, after all. =:^)
--
Duncan - List replies preferred. No HTML msgs.
"Every nonfree program has a lord, a master --
and if you use the program, he is your master." Richard Stallman
