Discussion: how stable are snapshots at the block level?
Mathijs Kwik
2011-10-23 07:45:10 UTC
Permalink
Hi all,

I'm currently doing backups by doing a btrfs snapshot, then rsync the
snapshot to my backup location.
As I have a lot of small files and quite some changes between
snapshots, this process is taking more and more time.
I looked at "btrfs find-new", which is promising, but I need
something to track deletes and modifications too.
Also, while this will help the initial comparison phase, most time is
still spent on the syncing itself, as a lot of overhead is caused by
the tiny files.
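For what it's worth, the "find-new" approach can at least be reduced to a list of changed paths with a small filter. This is a hedged sketch, not from btrfs documentation: the assumed output field layout (lines starting with "inode", the path as the last whitespace-separated field, no embedded spaces) should be checked against your btrfs-progs version, and deletions remain invisible, which is exactly the limitation above.

```shell
# parse_find_new: reduce "btrfs subvolume find-new" output to a unique,
# sorted list of changed paths. Assumes the path is the last field and
# contains no spaces (an assumption, not a guarantee).
parse_find_new() {
  awk '$1 == "inode" { print $NF }' | sort -u
}

# Example on captured output (inode numbers and paths are hypothetical):
sample='inode 257 file offset 0 len 4096 disk start 12582912 offset 0 gen 10 flags NONE etc/hosts
inode 258 file offset 0 len 8192 disk start 12587008 offset 0 gen 11 flags NONE etc/fstab'
printf '%s\n' "$sample" | parse_find_new
```

Real usage would pipe `btrfs subvolume find-new <snapshot> <last-generation>` into the same filter.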

After finding some discussion about it here:
http://www.backupcentral.com/phpBB2/two-way-mirrors-of-external-mailing-lists-3/backuppc-21/using-rsync-for-blockdevice-level-synchronisation-of-backupp-100438

I found that the official rsync-patches tarball includes the patch
that allows syncing full block devices.
After the initial backup, I found that this indeed speeds up my backups a lot.
Of course this is meant for syncing unmounted filesystems (or other
things that are "stable" at the block level, like LVM snapshot
volumes).
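As a hedged sketch of that setup: it assumes an rsync built with the copy-devices patch from the official rsync-patches tarball, and the device path and backup host are hypothetical. DRY_RUN=1 only prints the command instead of touching any device.

```shell
# Block-device sync sketch. Requires a patched rsync (copy-devices patch
# from the rsync-patches tarball) on both ends; the exact flag set here
# is an assumption, not a tested recipe.
DRY_RUN=1
SRC_DEV=/dev/sda2              # hypothetical source device
DEST=backuphost:/dev/sdb1      # hypothetical destination

run() { if [ "${DRY_RUN:-0}" = 1 ]; then echo "$@"; else "$@"; fi; }

# --inplace because the destination device already exists and cannot be
# replaced by a temporary file the way a regular rsync transfer works.
run rsync --copy-devices --inplace "$SRC_DEV" "$DEST"
```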

I tested backing up a live btrfs filesystem by making a btrfs
snapshot, and this (very simple, non-thorough) turned out to work ok.
My root subvolume contains the "current" subvolume (which I mount) and
several backup subvolumes.
Of course I understand that the "current" subvolume on the backup
destination is broken/inconsistent, as I change it during the rsync
run. But when I mounted the backup disk and compared the subvolumes
using normal file-by-file rsync, they were identical.

Can someone with knowledge about the on-disk structure please
confirm/reject that subvolumes (created before starting rsync on the
block device) should be safe and never move by themselves? Or was I
just lucky?
Are there any things that might break the backup when performed during rsync?
Like creating/deleting other subvolumes; defrag probably isn't a good
idea either :)

Or are there any incompatible mount options (compression, space_cache, ssd)?

Thanks for any comments on this.
Mathijs
--
To unsubscribe from this list: send the line "unsubscribe linux-btrfs" in
the body of a message to ***@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Edward Ned Harvey
2011-10-23 14:08:27 UTC
Permalink
Post by Mathijs Kwik
I'm currently doing backups by doing a btrfs snapshot, then rsync the
snapshot to my backup location.
As I have a lot of small files and quite some changes between
snapshots, this process is taking more and more time.
I looked at "btrfs find-new", which is promising, but I need
something to track deletes and modifications too.
Also, while this will help the initial comparison phase, most time is
still spent on the syncing itself, as a lot of overhead is caused by
the tiny files.
No word on when this will be available, but "btrfs send" (or whatever it's going to be called) is currently in the works. This is really what you want.
Post by Mathijs Kwik
http://www.backupcentral.com/phpBB2/two-way-mirrors-of-external-mailing-lists-3/backuppc-21/using-rsync-for-blockdevice-level-synchronisation-of-backupp-100438
When you rsync at the file level, it needs to walk the directory structure, which is essentially a bunch of random IO. When you rsync at the block level, it needs to read the entire storage device sequentially. The latter is only a possible benefit, when the amount of time to walk the tree is significantly greater than the time to read the entire block device.

Even if you rsync the block-level device, the local rsync will have to read the entire block device to search for binary differences before sending. This will probably have the opposite effect from what you want - because every time you created and deleted a file, and every time you overwrote an existing block (copy on write), it still represents binary differences on disk. So even though that file was deleted, or several modifications all yielded a single modification in the end, all the bytes of the deleted files and of the formerly occupied file deltas will be sent anyway. Unless you always zero them out, or something.

Given that you're talking about rsync'ing a block level device that contains btrfs, I'm assuming you have no raid/redundancy. And the receiving end is the same.

Also if you're rsyncing the block level device, you're running underneath btrfs and losing any checksumming benefit that btrfs was giving you, so you're possibly introducing risk for silent data corruption. (Or more accurately, failing to allow btrfs to detect/correct it.)
Post by Mathijs Kwik
I found that the official rsync-patches tarball includes the patch
that allows syncing full block devices.
After the initial backup, I found that this indeed speeds up my backups a lot.
Of course this is meant for syncing unmounted filesystems (or other
things that are "stable" at the block level, like LVM snapshot
volumes).
Just guessing you did a minimal test. Send initial image, then make some changes, then send again. I don't expect this to be typical after a day or a week of usage, for the reasons previously described.
Post by Mathijs Kwik
I tested backing up a live btrfs filesystem by making a btrfs
snapshot, and this (very simple, non-thorough) turned out to work ok.
My root subvolume contains the "current" subvolume (which I mount) and
several backup subvolumes.
Of course I understand that the "current" subvolume on the backup
destination is broken/inconsistent, as I change it during the rsync
run. But when I mounted the backup disk and compared the subvolumes
using normal file-by-file rsync, they were identical.
I may be wrong, but this sounds dangerous to me. As you've demonstrated, it will probably work a lot of the time - because the subvols and everything necessary to reference them are static on disk most of the time. But as soon as you write to any of the subvols - and that includes a scan, fsck, rebalance, defrag, etc., anything that writes transparently behind the scenes as far as user processes are concerned - those could break things.
Post by Mathijs Kwik
Thanks for any comments on this.
I suggest one of a few options:
(a) Stick with rsync at the file level. It's stable.
(b) Wait for btrfs send (or whatever) to become available
(c) Use ZFS. Both ZFS and BTRFS have advantages over one another. This is an area where ZFS has the advantage for now.

Mathijs Kwik
2011-10-23 15:19:42 UTC
Permalink
I'm currently doing backups by doing a btrfs snapshot, then rsync the
snapshot to my backup location.
As I have a lot of small files and quite some changes between
snapshots, this process is taking more and more time.
I looked at "btrfs find-new", which is promising, but I need
something to track deletes and modifications too.
Also, while this will help the initial comparison phase, most time is
still spent on the syncing itself, as a lot of overhead is caused by
the tiny files.
No word on when this will be available, but "btrfs send" (or whatever it's going to be called) is currently in the works. This is really what you want.
http://www.backupcentral.com/phpBB2/two-way-mirrors-of-external-mailing-lists-3/backuppc-21/using-rsync-for-blockdevice-level-synchronisation-of-backupp-100438
When you rsync at the file level, it needs to walk the directory structure, which is essentially a bunch of random IO. When you rsync at the block level, it needs to read the entire storage device sequentially. The latter is only a possible benefit when the amount of time to walk the tree is significantly greater than the time to read the entire block device.

My test was just a 10G block device filled with random files between
512b and 8k.
While this is a contrived example, in this case a block-level rsync is
way, way faster. It's not just the tree-walking that's slow; I
guess there's some per-file overhead too. When not using rsync but
plain dd, it's even faster (at the expense of more writes, even when
unneeded), since it can almost transfer data at the maximum write
speed for the receiver.
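The plain-dd variant above can be sketched like this. Real usage would target block devices (hypothetical names such as /dev/sdb and /dev/sdc); regular temp files stand in here so the sketch is safe to run.

```shell
# dd copies every block, used or not: maximal writes, no differencing
# pass, near-sequential throughput.
# Real form (hypothetical devices): dd if=/dev/sdb of=/dev/sdc bs=4M conv=fsync
src=$(mktemp) dst=$(mktemp)
printf 'snapshot image bytes' > "$src"
dd if="$src" of="$dst" bs=4M conv=fsync 2>/dev/null
cmp -s "$src" "$dst" && echo "copies match"
```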
Even if you rsync the block-level device, the local rsync will have to read the entire block device to search for binary differences before sending. This will probably have the opposite effect from what you want - because every time you created and deleted a file, and every time you overwrote an existing block (copy on write), it still represents binary differences on disk. So even though that file was deleted, or several modifications all yielded a single modification in the end, all the bytes of the deleted files and of the formerly occupied file deltas will be sent anyway. Unless you always zero them out, or something.

I understand. A block copy is not advantageous in every situation. I'm
just trying to find out if it's possible for the situations where it
is beneficial.
Given that you're talking about rsync'ing a block-level device that contains btrfs, I'm assuming you have no raid/redundancy. And the receiving end is the same.

Yup, in my example I synced my laptop ssd to an external disk (usb3).
Also if you're rsyncing the block-level device, you're running underneath btrfs and losing any checksumming benefit that btrfs was giving you, so you're possibly introducing risk for silent data corruption. (Or more accurately, failing to allow btrfs to detect/correct it.)

Not sure... I'm sure that's the case for in-use subvolumes, but
shouldn't snapshots (and their metadata/checksums) just be safe?
I found that the official rsync-patches tarball includes the patch
that allows syncing full block devices.
After the initial backup, I found that this indeed speeds up my backups a lot.
Of course this is meant for syncing unmounted filesystems (or other
things that are "stable" at the block level, like LVM snapshot
volumes).
Just guessing you did a minimal test. Send initial image, then make some changes, then send again. I don't expect this to be typical after a day or a week of usage, for the reasons previously described.
I tested backing up a live btrfs filesystem by making a btrfs
snapshot, and this (very simple, non-thorough) turned out to work ok.
My root subvolume contains the "current" subvolume (which I mount) and
several backup subvolumes.
Of course I understand that the "current" subvolume on the backup
destination is broken/inconsistent, as I change it during the rsync
run. But when I mounted the backup disk and compared the subvolumes
using normal file-by-file rsync, they were identical.
I may be wrong, but this sounds dangerous to me. As you've demonstrated, it will probably work a lot of the time - because the subvols and everything necessary to reference them are static on disk most of the time. But as soon as you write to any of the subvols - and that includes a scan, fsck, rebalance, defrag, etc., anything that writes transparently behind the scenes as far as user processes are concerned - those could break things.

I understand there are harmful operations; that's why I'm asking if it
is known exactly what those actions are. I'm not writing to the
snapshots (only to my "current" subvol) during rsync/dd, and I make
sure not to rebalance or defrag (basically I don't use any btrfs progs).
I understand that "current" will be corrupt on the backup destination,
but it would be great to know that all other subvolumes should be
safe.

For this case (my laptop) I can stick to file-based rsync, but I think
some guarantees should exist at the block level. Many virtual machines
and cloud hosting services (like EC2) provide block-level snapshots.
With xfs, I can freeze the filesystem for a short amount of time
(<100ms), snapshot, then unfreeze. I don't think such a lock/freeze feature
exists for btrfs, but if btrfs guarantees all snapshots are stable as
long as you don't use any btrfs tools while snapping, it's not needed
either. Of course I understand there's a difference between an instant
block snapshot and a dd/rsync session that takes a few minutes, but if
the don't-use-dangerous-operations conditions are met, it shouldn't
matter for snapshots that aren't in use.

Also, I can see how future applications might want to use btrfs for
providing history, or other special purposes that they now write their
own b-tree code for. If the above holds true, block backups would have
no issues backing up this data, while file backups might lead to
enormous redundancy as files/blocks shared between multiple subvolumes
get unCOWed on the destination.
Thanks for any comments on this.
(a) Stick with rsync at the file level. It's stable.
(b) Wait for btrfs send (or whatever) to become available.
(c) Use ZFS. Both ZFS and BTRFS have advantages over one another. This is an area where ZFS has the advantage for now.

Thanks for your advice,
Like I said, for me, right now, sticking to tried-and-tested
file-based rsync is just ok. But I hope to get some insights into
other possibilities. btrfs send sounds cool, but I sure hope this is
not the only solution, as I described a few scenarios where
block-level copies have advantages.

Mathijs
Stephane CHAZELAS
2011-10-24 12:36:28 UTC
Permalink
2011-10-23, 17:19(+02), Mathijs Kwik:
[...]
For this case (my laptop) I can stick to file-based rsync, but I think
some guarantees should exist at the block level. Many virtual machines
and cloud hosting services (like ec2) provide block-level snapshots.
With xfs, I can freeze the filesystem for a short amount of time
(<100ms), snapshot, unfreeze. I don't think such a lock/freeze feature
exists for btrfs
[...]

That FS-freeze feature has been moved to the VFS layer, so it is
available to any filesystem now.

You can either use xfs_io (see the -F option to "freeze" for foreign
filesystems), as for xfs, or use fsfreeze from util-linux.
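A minimal sketch of that freeze/snapshot/thaw sequence, assuming fsfreeze from util-linux. The mount point and the LVM snapshot step are hypothetical placeholders, and DRY_RUN=1 just prints the commands, since really running them needs root.

```shell
# Freeze -> snapshot -> thaw. Keep the frozen window as short as
# possible: only the snapshot itself happens between freeze and thaw.
DRY_RUN=1
MNT=/mnt/data                  # hypothetical mount point

run() { if [ "${DRY_RUN:-0}" = 1 ]; then echo "$@"; else "$@"; fi; }

run fsfreeze --freeze "$MNT"
run lvcreate --snapshot --name data-snap --size 1G /dev/vg0/data  # placeholder snapshot step
run fsfreeze --unfreeze "$MNT"
```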

Note that you can thaw file systems with a sysrq combination
now (for instance, with Xen, using "xm sysrq vm j").

For block-level snapshots, see also ddsnap (a device-mapper target,
unfortunately no longer maintained) and of course LVM (which
doesn't scale well with several snapshots).
--
Stephane

Edward Ned Harvey
2011-10-24 13:59:32 UTC
Permalink
Sent: Sunday, October 23, 2011 11:20 AM
Post by Edward Ned Harvey
Also if you're rsyncing the block level device, you're running underneath
btrfs and losing any checksumming benefit that btrfs was giving you, so
you're possibly introducing risk for silent data corruption. (Or more
accurately, failing to allow btrfs to detect/correct it.)
Not sure... I'm sure that's the case for in-use subvolumes, but
shouldn't snapshots (and their metadata/checksums) just be safe?
Nope. The whole point of checksumming is like this: All devices are imperfect. They have built-in error detection and correction. Whenever an error occurs (which is often) the drive tries to silently correct it (reread) without telling the OS. But the checksumming in hardware is rather weak. Sometimes you'll get corrupt data that passes the hardware test and reaches the OS without any clue that it's wrong. I find that a typical small business fileserver (10 sata disks) hits these approx once a year.

Filesystem checksumming is much stronger (a far lower probability of silently allowing an error) - on the order of randomly selecting the same single molecule twice consecutively amongst all the molecules in the solar system; much less likely to occur than the end of the human race, etc. So when the silent errors occur, filesystem checksumming all but certainly detects them, and if possible, corrects them.

If you are reading the raw device underneath btrfs, you are not getting the benefit of the filesystem checksumming. If you encounter an undetected read/write error, it will silently pass. Your data will be corrupted, you'll never know about it until you see the side-effects (whatever they may be).

While people with computers have accepted this level of unreliability for years (fat32, ntfs, ext3/4, etc) people are now beginning to recognize the importance on a greater scale. Once corrupted, always corrupted. People want to keep their data indefinitely.
Thanks for your advice,
Like I said, for me, right now, sticking to tried-and-tested
file-based rsync is just ok. But I hope to get some insights into
other possibilities. btrfs send sounds cool, but I sure hope this is
not the only solution, as I described a few scenarios where
block-level copies have advantages.
There is never a situation where block-level copies have any advantage over something like btrfs send - except perhaps forensics or espionage. In terms of fast, efficient, reliable backups, btrfs send has every advantage and no disadvantage compared to a block-level copy.

There are many situations where btrfs send has an advantage over both block level and file level copies. It instantly knows all the relevant disk blocks to send, it preserves every property, it's agnostic about filesystem size or layout on either sending or receiving end, you have the option to create different configurations on each side, including compression etc. And so on.

Stephane CHAZELAS
2011-10-24 15:08:30 UTC
Permalink
2011-10-24, 09:59(-04), Edward Ned Harvey:
[...]
Post by Edward Ned Harvey
If you are reading the raw device underneath btrfs, you are
not getting the benefit of the filesystem checksumming. If
you encounter an undetected read/write error, it will silently
pass. Your data will be corrupted, you'll never know about it
until you see the side-effects (whatever they may be).
[...]

I don't follow you here. If you're cloning a device holding a
btrfs FS, you'll clone the checksums as well. If there were
errors, they will be detected on the cloned FS as well?
Post by Edward Ned Harvey
There is never a situation where block level copies have any
advantage over something like btrfs send. Except perhaps
forensics or espionage. But in terms of fast efficient
reliable backups, btrfs send has every advantage and no
disadvantage compared to block level copy.
$ btrfs send
ERROR: unknown command 'send'
Usage:
[...]

(from 2011-10-12 integration branch). Am I missing something?
Post by Edward Ned Harvey
There are many situations where btrfs send has an advantage
over both block level and file level copies. It instantly
knows all the relevant disk blocks to send, it preserves every
property, it's agnostic about filesystem size or layout on
either sending or receiving end, you have the option to create
different configurations on each side, including compression
etc. And so on.
[...]

That sounds like "zfs send", I didn't know btrfs had it yet.

My understanding was that to clone/backup a btrfs FS, you could
only clone the block devices or use the "device add" + "device
del" trick with some extra copy-on-write (LVM, nbd) layer.
--
Stephane

Edward Ned Harvey
2011-10-25 11:46:48 UTC
Permalink
Post by Stephane CHAZELAS
[...]
Post by Edward Ned Harvey
If you are reading the raw device underneath btrfs, you are
not getting the benefit of the filesystem checksumming. If
you encounter an undetected read/write error, it will silently
pass. Your data will be corrupted, you'll never know about it
until you see the side-effects (whatever they may be).
[...]
I don't follow you here. If you're cloning a device holding a
btrfs FS, you'll clone the checksums as well. If there were
errors, they will be detected on the cloned FS as well?
You're right and I'm right. You will have read them, transferred them, and
written them without checking them. So any corruption at this point is
undetected. But later when you mount the destination FS, you would then be
checking checksums again.
Post by Stephane CHAZELAS
Post by Edward Ned Harvey
There is never a situation where block level copies have any
advantage over something like btrfs send. Except perhaps
forensics or espionage. But in terms of fast efficient
reliable backups, btrfs send has every advantage and no
disadvantage compared to block level copy.
$ btrfs send
ERROR: unknown command 'send'
[...]
(from 2011-10-12 integration branch). Am I missing something?
As previously mentioned in this thread, btrfs send (or whatever it will be
called) is not available yet.

My suggestion to the OP of this thread is to use rsync for now, wait for
btrfs send, or switch to zfs.

Stephane CHAZELAS
2011-10-25 12:01:17 UTC
Permalink
2011-10-25, 07:46(-04), Edward Ned Harvey:
[...]
Post by Edward Ned Harvey
My suggestion to the OP of this thread is to use rsync for now, wait for
btrfs send, or switch to zfs.
[...]

rsync won't work if you've got snapshot volumes, though (unless
you're prepared to have a backup copy thousands of times the
size of the original, or have a framework in place to replicate
the snapshots on the backup copy as soon as they are created,
but before they're written to).

To back up a btrfs FS with snapshots, the only option seems to
be to copy the block devices for now (or the other trick
mentioned earlier).
--
Stephane

Edward Ned Harvey
2011-10-25 12:04:56 UTC
Permalink
Post by Stephane CHAZELAS
rsync won't work if you've got snapshot volumes though (unless
(etc blah)
Please read the OP. He is currently using rsync to back up his snapshots and
is not worried about the present state of the filesystem - only the
snapshots. He's looking for something faster.

Chris Mason
2011-10-23 16:05:35 UTC
Permalink
Post by Mathijs Kwik
Hi all,
I'm currently doing backups by doing a btrfs snapshot, then rsync the
snapshot to my backup location.
As I have a lot of small files and quite some changes between
snapshots, this process is taking more and more time.
I looked at "btrfs find-new", which is promising, but I need
something to track deletes and modifications too.
Also, while this will help the initial comparison phase, most time is
still spent on the syncing itself, as a lot of overhead is caused by
the tiny files.
http://www.backupcentral.com/phpBB2/two-way-mirrors-of-external-mailing-lists-3/backuppc-21/using-rsync-for-blockdevice-level-synchronisation-of-backupp-100438
I found that the official rsync-patches tarball includes the patch
that allows syncing full block devices.
After the initial backup, I found that this indeed speeds up my backups a lot.
Of course this is meant for syncing unmounted filesystems (or other
things that are "stable" at the block level, like LVM snapshot
volumes).
I tested backing up a live btrfs filesystem by making a btrfs
snapshot, and this (very simple, non-thorough) turned out to work ok.
My root subvolume contains the "current" subvolume (which I mount) and
several backup subvolumes.
Of course I understand that the "current" subvolume on the backup
destination is broken/inconsistent, as I change it during the rsync
run. But when I mounted the backup disk and compared the subvolumes
using normal file-by-file rsync, they were identical.
Can someone with knowledge about the on-disk structure please
confirm/reject that subvolumes (created before starting rsync on the
block device) should be safe and never move by themselves? Or was I
just lucky?
Are there any things that might break the backup when performed during rsync?
Like creating/deleting other subvolumes, probably defrag isn't a good
idea either :)
The short answer is that you were lucky ;)

The big risk is the extent allocation tree is changing, and the tree of
tree roots is changing and so the result of the rsync isn't going to be
a fully consistent filesystem.

With that said, as long as you can mount it the actual files in the
snapshot are going to be valid. The only exceptions are if you've run a
filesystem balance or removed a drive during the rsync.

-chris



Mathijs Kwik
2011-10-23 16:41:20 UTC
Permalink
Post by Chris Mason
Post by Mathijs Kwik
Hi all,
I'm currently doing backups by doing a btrfs snapshot, then rsync the
snapshot to my backup location.
As I have a lot of small files and quite some changes between
snapshots, this process is taking more and more time.
I looked at "btrfs find-new", which is promising, but I need
something to track deletes and modifications too.
Also, while this will help the initial comparison phase, most time is
still spent on the syncing itself, as a lot of overhead is caused by
the tiny files.
http://www.backupcentral.com/phpBB2/two-way-mirrors-of-external-mailing-lists-3/backuppc-21/using-rsync-for-blockdevice-level-synchronisation-of-backupp-100438
I found that the official rsync-patches tarball includes the patch
that allows syncing full block devices.
After the initial backup, I found that this indeed speeds up my backups a lot.
Of course this is meant for syncing unmounted filesystems (or other
things that are "stable" at the block level, like LVM snapshot
volumes).
I tested backing up a live btrfs filesystem by making a btrfs
snapshot, and this (very simple, non-thorough) turned out to work ok.
My root subvolume contains the "current" subvolume (which I mount) and
several backup subvolumes.
Of course I understand that the "current" subvolume on the backup
destination is broken/inconsistent, as I change it during the rsync
run. But when I mounted the backup disk and compared the subvolumes
using normal file-by-file rsync, they were identical.
Can someone with knowledge about the on-disk structure please
confirm/reject that subvolumes (created before starting rsync on the
block device) should be safe and never move by themselves? Or was I
just lucky?
Are there any things that might break the backup when performed during rsync?
Like creating/deleting other subvolumes; defrag probably isn't a good
idea either :)
The short answer is that you were lucky ;)
That's why I only try this at home :)
Post by Chris Mason
The big risk is the extent allocation tree is changing, and the tree of
tree roots is changing, and so the result of the rsync isn't going to be
a fully consistent filesystem.
Nope, I understand it's not fully consistent, but I'm hoping for
consistency for all subvols that weren't in use during the sync/dd.
Post by Chris Mason
With that said, as long as you can mount it the actual files in the
snapshot are going to be valid. The only exceptions are if you've run a
filesystem balance or removed a drive during the rsync.
Do I understand correctly that as long as I don't defrag/balance or
add/remove drives (my example is just about 1 drive though), there's a
chance the result isn't mountable, but if it _is_ mountable, all
subvolumes that weren't touched during the rsync/dd should be fine?
Or is there a chance that some files/dirs (in a snapshot volume) are
fine, but others are broken?

In other words: do I only need to check that the destination is
mountable afterwards, or is that by itself not enough?

You mentioned 2 important trees
- tree of tree roots
- extent allocation tree

My root subvolume contains only subvolumes (no dirs/files), 1 of which
is mounted with -o subvol, the rest are snapshots.
Am I correct to assume the tree of tree roots doesn't change as long
as I don't create/remove subvols?

And for the extent allocation tree, can I assume that all changes to
extent allocation will be related to files/dirs changing on the
currently in-use subvolume? All extents that contain files in any of
the snapshots will still be there as changes to those files in
"current" will be COWed to new extents. So the risk is not that
extents are marked "free" when they aren't, but I might end up with
extents that are marked in-use while they are free.
As I expect "current" to become corrupt in the destination, I will
remove the subvolume there. Will that take care of the extent
allocation tree? Or will there still be extents marked "in use"
without any subvolume/dir/file pointing at it? If so, this is probably
something that the future fsck can deal with?
Thanks,
Mathijs