Combining the official documentation from www.kernel.org with a reading of the CentOS 7 kernel source, here is first an overview of the dirty-page-related kernel parameters:
1. vm.dirty_background_ratio
Contains, as a percentage of total available memory that contains free pages
and reclaimable pages, the number of pages at which the background kernel
flusher threads will start writing out dirty data.
The total available memory is not equal to total system memory.
When the dirty pages in the file-system cache exceed vm.dirty_background_ratio% of the currently available memory, the kernel background flusher is woken to write back dirty pages.
Waking the writeback work returns immediately without waiting for it to finish; the actual writeback is carried out by the kernel writeback threads.
The vm.dirty_background_ratio percentage is relative to available memory (free pages plus reclaimable file pages), not to total system memory; the valid range is [0, 100].
2. vm.dirty_ratio
Contains, as a percentage of total available memory that contains free pages
and reclaimable pages, the number of pages at which a process which is
generating disk writes will itself start writing out dirty data.
The total available memory is not equal to total system memory.
When the dirty pages in the file-system cache exceed vm.dirty_ratio% of the currently available memory, a process writing to a file triggers writeback itself and blocks on I/O, waiting synchronously and thereby throttling its write rate; the pause ranges from 10 ms to 200 ms, see https://lwn.net/Articles/405076/ for details.
The vm.dirty_ratio percentage is likewise relative to available memory (free pages plus reclaimable file pages), not to total system memory.
The difference between dirty_background_ratio and dirty_ratio is that the former only wakes the writeback flusher threads,
so the application can still write data asynchronously into the cache. If the dirty-data ratio keeps growing until the dirty_ratio condition triggers,
the application process is put into TASK_KILLABLE state and waits synchronously until writeback completes or times out.
3. vm.dirty_expire_centisecs
This tunable is used to define when dirty data is old enough to be eligible
for writeout by the kernel flusher threads. It is expressed in 100'ths
of a second. Data which has been dirty in-memory for longer than this
interval will be written out next time a flusher thread wakes up.
Expiry time after which dirty data is flushed (unit: 1/100 s). When there are dirty inodes in memory, the kernel writeback threads are woken periodically
and, via wb_check_old_data_flush, write back dirty data that has existed for more than dirty_expire_centisecs/100 seconds.
4. vm.dirty_writeback_centisecs
The kernel flusher threads will periodically wake up and write `old' data
out to disk. This tunable expresses the interval between those wakeups, in
100'ths of a second.
Setting this to zero disables periodic writeback altogether.
Periodic wakeup interval of the flusher threads (unit: 1/100 s). While dirty inodes exist in memory, the kernel writeback
threads are woken every dirty_writeback_centisecs/100 seconds.
Now let's look at how these settings take effect, based on the CentOS 7.x kernel 3.10.0-1062.18.1.el7:
The ps command shows a writeback kernel thread:
# ps aux | grep "writeback" | grep -v grep
root 31 0.0 0.0 0 0 ? S< May16 0:00 [writeback]
This thread is associated with the bdi_wq workqueue, created in the default_bdi_init function:
static int __init default_bdi_init(void)
{
...
bdi_wq = alloc_workqueue("writeback", WQ_MEM_RECLAIM | WQ_FREEZABLE |
                                      WQ_UNBOUND | WQ_SYSFS, 0);
return err;
}
The work item's function is bdi_writeback_workfn, which is the writeback function the kernel writeback thread executes once woken:
static void bdi_wb_init(struct bdi_writeback *wb, struct backing_dev_info *bdi)
{
...
INIT_DELAYED_WORK(&wb->dwork, bdi_writeback_workfn);
}
void bdi_writeback_workfn(struct work_struct *work)
{
...
/* if the work list still has pending items, wake the writeback thread again immediately */
if (!list_empty(&bdi->work_list))
mod_delayed_work(bdi_wq, &wb->dwork, 0);
/* there is still dirty data: schedule a wakeup dirty_writeback_interval * 10 ms later,
   i.e. flush dirty pages with period dirty_writeback_interval * 10 until no dirty pages remain */
else if (wb_has_dirty_io(wb) && dirty_writeback_interval)
bdi_wakeup_thread_delayed(bdi); /* the other caller of this delayed wakeup is __mark_inode_dirty, so a newly dirtied inode also starts the periodic writeback */
...
}
void bdi_wakeup_thread_delayed(struct backing_dev_info *bdi)
{
unsigned long timeout;
timeout = msecs_to_jiffies(dirty_writeback_interval * 10); /* dirty_writeback_interval corresponds to vm.dirty_writeback_centisecs */
spin_lock_bh(&bdi->wb_lock);
if (test_bit(BDI_registered, &bdi->state))
queue_delayed_work(bdi_wq, &bdi->wb.dwork, timeout);
spin_unlock_bh(&bdi->wb_lock);
}
static long wb_do_writeback(struct bdi_writeback *wb)
{
...
/*
* Check for periodic writeback, kupdated() style
*/
wrote += wb_check_old_data_flush(wb); /* write back only data that has been dirty longer than dirty_expire_centisecs * 10 ms */
wrote += wb_check_background_flush(wb); /* write back dirty pages only when their count exceeds background_thresh */
clear_bit(BDI_writeback_running, &wb->bdi->state);
return wrote;
}
The vm.dirty_background_ratio and vm.dirty_ratio parameters are used in the balance_dirty_pages function,
which is called whenever an application process generates dirty data:
static void balance_dirty_pages(struct address_space *mapping,
unsigned long pages_dirtied)
{
...
for (;;) {
...
/*
* Unstable writes are a feature of certain networked
* filesystems (i.e. NFS) in which data may have been
* written to the server's write cache, but has not yet
* been flushed to permanent storage.
*/
nr_reclaimable = global_page_state(NR_FILE_DIRTY) +
global_page_state(NR_UNSTABLE_NFS);
nr_dirty = nr_reclaimable + global_page_state(NR_WRITEBACK); /* current number of dirty pages */
/* background_thresh is computed from dirty_background_ratio, dirty_thresh from dirty_ratio */
global_dirty_limits(&background_thresh, &dirty_thresh);
...
}
...
}
void global_dirty_limits(unsigned long *pbackground, unsigned long *pdirty)
{
unsigned long background;
unsigned long dirty;
unsigned long uninitialized_var(available_memory);
struct task_struct *tsk;
if (!vm_dirty_bytes || !dirty_background_bytes) /* vm_dirty_bytes and dirty_background_bytes default to 0 */
available_memory = global_dirtyable_memory(); /* number of currently dirtyable memory pages; global_dirtyable_memory is detailed below */
if (vm_dirty_bytes) /* defaults to 0 */
dirty = DIV_ROUND_UP(vm_dirty_bytes, PAGE_SIZE);
else
dirty = (vm_dirty_ratio * available_memory) / 100; /* vm_dirty_ratio as a percentage of available memory */
if (dirty_background_bytes)
background = DIV_ROUND_UP(dirty_background_bytes, PAGE_SIZE);
else
background = (dirty_background_ratio * available_memory) / 100; /* dirty_background_ratio as a percentage of available memory */
if (background >= dirty) /* the threshold from dirty_background_ratio must stay below the one from vm_dirty_ratio, otherwise heavy writers easily end up blocked in D state */
background = dirty / 2; /* if dirty_background_ratio >= vm_dirty_ratio, the effect is as if dirty_background_ratio = vm_dirty_ratio / 2 */
tsk = current;
if (tsk->flags & PF_LESS_THROTTLE || rt_task(tsk)) {
background += background / 4;
dirty += dirty / 4;
}
*pbackground = background; /* when a writing process reaches balance_dirty_pages and the dirty page count exceeds this value, writeback work is queued and the "writeback" thread is woken to run bdi_writeback_workfn */
*pdirty = dirty; /* when a writing process reaches balance_dirty_pages and the dirty page count exceeds this value, the process is put into D state for 10 ms to 200 ms to throttle writes; see https://lwn.net/Articles/405076/ */
trace_global_dirty_state(background, dirty);
}
static unsigned long global_dirtyable_memory(void)
{
unsigned long x;
x = global_page_state(NR_FREE_PAGES); /* current free pages */
x -= min(x, dirty_balance_reserve); /* subtract the dirty_balance_reserve pages, computed dynamically from total memory by calculate_totalreserve_pages */
x += global_page_state(NR_INACTIVE_FILE); /* inactive file pages */
x += global_page_state(NR_ACTIVE_FILE); /* active file pages */
if (!vm_highmem_is_dirtyable)
x -= highmem_dirtyable_memory(x); /* 0 on 64-bit systems; see /proc/buddyinfo */
/* Subtract min_free_kbytes */
x -= min_t(unsigned long, x, min_free_kbytes >> (PAGE_SHIFT - 10)); /* subtract the reserved minimum free memory, vm.min_free_kbytes */
return x + 1; /* Ensure that we never return 0 */
}
With vm.dirty_background_bytes and vm.dirty_bytes unset, the effect of dirty_background_ratio and dirty_ratio can be summarized as (in pages, assuming 4 KiB pages):
available_memory = NR_FREE_PAGES - dirty_balance_reserve + NR_INACTIVE_FILE + NR_ACTIVE_FILE - (min_free_kbytes / 4)
background_thresh = (dirty_background_ratio * available_memory) / 100 = (vm.dirty_background_ratio * available_memory) / 100
dirty_thresh = (vm_dirty_ratio * available_memory) / 100 = (vm.dirty_ratio * available_memory) / 100
dirty_background_ratio must be smaller than dirty_ratio; if it is set greater than or equal to dirty_ratio, the value that actually takes effect is
dirty_background_ratio = dirty_ratio / 2. The reason dirty_ratio has to exceed dirty_background_ratio is to rule out the case where the
dirty page count exceeds dirty_thresh without ever having crossed background_thresh: background writeback would never have been woken, and application processes would block on I/O waiting for dirty data to be written back.
Based on the analysis above, we can derive tuning strategies for these parameters in different scenarios:
vm.dirty_background_ratio
vm.dirty_ratio
vm.dirty_expire_centisecs
vm.dirty_writeback_centisecs
1. When data safety is the priority, lower all four parameters so that dirty data is flushed to disk sooner;
2. When performance matters more than the risk of losing recently written data, raise them to cache more in memory and reduce I/O;
3. When the workload has occasional I/O bursts, lower dirty_background_ratio and raise dirty_ratio.
--- From the Tencent Cloud community --- CD