btrfs: don't force mounts to wait for cleaner_kthread to delete one or more subvolumes

During a mount, we start the cleaner kthread first because the transaction
kthread wants to wake up the cleaner kthread.  We start the transaction
kthread next because everything in btrfs wants transactions.  We do reloc
recovery in the thread that was doing the original mount call once the
transaction kthread is running.  This means that the cleaner kthread
could already be running when reloc recovery happens (e.g. if a snapshot
delete was started before a crash).
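
In outline, the relevant ordering in open_ctree() looks like this (a simplified sketch with error handling and the code in between omitted; the transaction kthread call is reproduced from memory and may not match the source exactly):

	/* cleaner first: the transaction kthread wants to wake it up */
	fs_info->cleaner_kthread = kthread_run(cleaner_kthread, tree_root,
					       "btrfs-cleaner");

	/* transaction kthread next: everything in btrfs wants transactions */
	fs_info->transaction_kthread = kthread_run(transaction_kthread,
						   tree_root, "btrfs-transaction");

	/* ... much later, still in the task doing the mount ... */
	ret = btrfs_recover_relocation(tree_root);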

Relocation does not play well with the cleaner kthread, so a mutex was
added in commit 5f3164813b "Btrfs: fix
race between balance recovery and root deletion" to prevent both from
being active at the same time.

If the cleaner kthread is already holding the mutex by the time we get
to btrfs_recover_relocation, the mount will be blocked until at least
one deleted subvolume is cleaned (possibly more if the mount process
doesn't get the lock right away).  During this time (which could be an
arbitrarily long time on a large/slow filesystem), the mount process is
stuck and the filesystem is unnecessarily inaccessible.
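
Before this patch, the mount path took the mutex only around relocation recovery itself, so it could sit in mutex_lock() behind the cleaner. Abridged from the pre-patch open_ctree() (see the removed lines in the diff below):

	ret = btrfs_cleanup_fs_roots(fs_info);
	if (ret)
		goto fail_qgroup;

	/* blocks here if cleaner_kthread already holds cleaner_mutex */
	mutex_lock(&fs_info->cleaner_mutex);
	ret = btrfs_recover_relocation(tree_root);
	mutex_unlock(&fs_info->cleaner_mutex);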

Fix this by locking cleaner_mutex before we start cleaner_kthread, and
unlocking the mutex after mount no longer requires it.  This ensures
that the mounting process will not be blocked by the cleaner kthread.
The cleaner kthread is already prepared for mutex contention and will
just go to sleep until the mutex is available.
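
That works because the cleaner loop only takes the mutex with a trylock and goes back to sleep when it loses the race, roughly like this (a condensed sketch of cleaner_kthread() from that era, not the exact source):

	do {
		if (btrfs_need_cleaner_sleep(root))
			goto sleep;

		/* mount still holds cleaner_mutex? just back off */
		if (!mutex_trylock(&root->fs_info->cleaner_mutex))
			goto sleep;

		/* ... clean one deleted snapshot, run delayed iputs ... */

		mutex_unlock(&root->fs_info->cleaner_mutex);
sleep:
		set_current_state(TASK_INTERRUPTIBLE);
		if (!kthread_should_stop())
			schedule();
		__set_current_state(TASK_RUNNING);
	} while (!kthread_should_stop());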

Signed-off-by: Zygo Blaxell <ce3g8jdj@umail.furryterror.org>
Reviewed-by: Filipe Manana <fdmanana@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
Zygo Blaxell 2016-05-05 00:23:49 -04:00 committed by David Sterba
parent 58d7bbf81f
commit 2f3165ecf1
1 changed file with 15 additions and 3 deletions

@@ -2517,6 +2517,7 @@ int open_ctree(struct super_block *sb,
 	int num_backups_tried = 0;
 	int backup_index = 0;
 	int max_active;
+	bool cleaner_mutex_locked = false;
 
 	tree_root = fs_info->tree_root = btrfs_alloc_root(fs_info, GFP_KERNEL);
 	chunk_root = fs_info->chunk_root = btrfs_alloc_root(fs_info, GFP_KERNEL);
@@ -2997,6 +2998,13 @@ retry_root_backup:
 		goto fail_sysfs;
 	}
 
+	/*
+	 * Hold the cleaner_mutex thread here so that we don't block
+	 * for a long time on btrfs_recover_relocation. cleaner_kthread
+	 * will wait for us to finish mounting the filesystem.
+	 */
+	mutex_lock(&fs_info->cleaner_mutex);
+	cleaner_mutex_locked = true;
 	fs_info->cleaner_kthread = kthread_run(cleaner_kthread, tree_root,
 					       "btrfs-cleaner");
 	if (IS_ERR(fs_info->cleaner_kthread))
@@ -3056,10 +3064,8 @@ retry_root_backup:
 		ret = btrfs_cleanup_fs_roots(fs_info);
 		if (ret)
 			goto fail_qgroup;
-
-		mutex_lock(&fs_info->cleaner_mutex);
+		/* We locked cleaner_mutex before creating cleaner_kthread. */
 		ret = btrfs_recover_relocation(tree_root);
-		mutex_unlock(&fs_info->cleaner_mutex);
 		if (ret < 0) {
 			printk(KERN_WARNING
 			       "BTRFS: failed to recover relocation\n");
@@ -3067,6 +3073,8 @@ retry_root_backup:
 			goto fail_qgroup;
 		}
 	}
+	mutex_unlock(&fs_info->cleaner_mutex);
+	cleaner_mutex_locked = false;
 
 	location.objectid = BTRFS_FS_TREE_OBJECTID;
 	location.type = BTRFS_ROOT_ITEM_KEY;
@@ -3180,6 +3188,10 @@ fail_cleaner:
 	filemap_write_and_wait(fs_info->btree_inode->i_mapping);
 
 fail_sysfs:
+	if (cleaner_mutex_locked) {
+		mutex_unlock(&fs_info->cleaner_mutex);
+		cleaner_mutex_locked = false;
+	}
 	btrfs_sysfs_remove_mounted(fs_info);
 
 fail_fsdev_sysfs: