writeback: update wb_over_bg_thresh() to use wb_domain aware operations

wb_over_bg_thresh() currently uses global_dirty_limits() and
wb_dirty_limit(), both of which are wrappers around operations that
take dirty_throttle_control.  For cgroup writeback support, the
function will be updated to also consider memcg wb_domains, which
requires the context information carried in dirty_throttle_control.

This patch updates wb_over_bg_thresh() so that it uses the underlying
wb_domain aware operations directly and builds the global
dirty_throttle_control in the process.

This patch doesn't introduce any behavioral changes.

Signed-off-by: Tejun Heo <tj@kernel.org>
Cc: Jens Axboe <axboe@kernel.dk>
Cc: Jan Kara <jack@suse.cz>
Cc: Wu Fengguang <fengguang.wu@intel.com>
Cc: Greg Thelen <gthelen@google.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
Tejun Heo 2015-05-22 18:23:32 -04:00 committed by Jens Axboe
parent aa661bbe1e
commit 947e9762a8
1 changed files with 12 additions and 5 deletions

@@ -1749,15 +1749,22 @@ EXPORT_SYMBOL(balance_dirty_pages_ratelimited);
  */
 bool wb_over_bg_thresh(struct bdi_writeback *wb)
 {
-	unsigned long background_thresh, dirty_thresh;
+	struct dirty_throttle_control gdtc_stor = { GDTC_INIT(wb) };
+	struct dirty_throttle_control * const gdtc = &gdtc_stor;
 
-	global_dirty_limits(&background_thresh, &dirty_thresh);
+	/*
+	 * Similar to balance_dirty_pages() but ignores pages being written
+	 * as we're trying to decide whether to put more under writeback.
+	 */
+	gdtc->avail = global_dirtyable_memory();
+	gdtc->dirty = global_page_state(NR_FILE_DIRTY) +
+		      global_page_state(NR_UNSTABLE_NFS);
+	domain_dirty_limits(gdtc);
 
-	if (global_page_state(NR_FILE_DIRTY) +
-	    global_page_state(NR_UNSTABLE_NFS) > background_thresh)
+	if (gdtc->dirty > gdtc->bg_thresh)
 		return true;
 
-	if (wb_stat(wb, WB_RECLAIMABLE) > wb_calc_thresh(wb, background_thresh))
+	if (wb_stat(wb, WB_RECLAIMABLE) > __wb_calc_thresh(gdtc))
 		return true;
 
 	return false;
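
For context, a simplified sketch of the dirty_throttle_control context that the rewritten function fills in. The field list below is abbreviated to what this hunk actually touches; the real structure in mm/page-writeback.c carries additional per-wb members and, later in the series, cgroup-related ones.

struct dirty_throttle_control {
	struct bdi_writeback	*wb;		/* set by GDTC_INIT(wb) */

	unsigned long		avail;		/* dirtyable memory */
	unsigned long		dirty;		/* NR_FILE_DIRTY + NR_UNSTABLE_NFS */
	unsigned long		thresh;		/* dirty threshold */
	unsigned long		bg_thresh;	/* background dirty threshold */

	/* ... per-wb counterparts (wb_dirty, wb_thresh, ...) omitted ... */
};

domain_dirty_limits(gdtc) derives ->thresh and ->bg_thresh from ->avail for the global wb_domain, and __wb_calc_thresh(gdtc) roughly scales that limit down to this wb's share, so both checks now read from the same context object that a memcg-aware version can later populate differently.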