
RTFSC: Read The Fucking Source Code
Linux Scheduler, Part 2: Scheduling Algorithms
Linux divides processes into real-time (RT) tasks and normal tasks, and uses sched_class structures to manage the scheduling algorithm of each class: rt_sched_class handles real-time tasks (SCHED_FIFO/SCHED_RR), fair_sched_class handles normal tasks (SCHED_NORMAL), while idle_sched_class (SCHED_IDLE) and dl_sched_class (SCHED_DEADLINE) are simpler and less commonly used.
The real-time scheduling algorithms have barely changed across kernel versions: under SCHED_FIFO the highest-priority task preempts and keeps running until it yields; SCHED_RR round-robins the timeslice among tasks of equal priority.
So when we say "the scheduling algorithm" we usually mean the algorithm for normal tasks (SCHED_NORMAL), which also make up the majority of tasks in a system. Since 2.6.24 the kernel has used CFS, which is still the mainstream today; before that, the 2.6 kernels used an O(1) algorithm.
2.1. The O(1) scheduler of Linux 2.6
Linux has 140 process priorities: priorities 0-99 belong to real-time tasks and 100-139 to normal tasks. nice(0) maps to priority 120, nice(-20) to priority 100, and nice(19) to priority 139.
/*
 * Convert user-nice values [ -20 ... 0 ... 19 ]
 * to static priority [ MAX_RT_PRIO..MAX_PRIO-1 ],
 * and back.
 *
 * 'User priority' is the nice value converted to something we
 * can work with better when scaling various scheduler parameters,
 * it's a [ 0 ... 39 ] range.
 */
#define USER_PRIO(p)        ((p) - MAX_RT_PRIO)
#define TASK_USER_PRIO(p)   USER_PRIO((p)->static_prio)
#define MAX_USER_PRIO       (USER_PRIO(MAX_PRIO))
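As a sanity check, this small stand-alone program (not kernel code; the constants are copied from the definitions above, plus the standard NICE_TO_PRIO() mapping) prints the nice-to-priority conversions quoted earlier:

#include <stdio.h>

#define MAX_RT_PRIO         100
#define MAX_PRIO            140
#define NICE_TO_PRIO(nice)  (MAX_RT_PRIO + (nice) + 20)
#define USER_PRIO(p)        ((p) - MAX_RT_PRIO)
#define MAX_USER_PRIO       (USER_PRIO(MAX_PRIO))

int main(void)
{
    /* nice -20/0/19 -> static priority 100/120/139, user priority 0/20/39 */
    int nices[] = { -20, 0, 19 };
    for (int i = 0; i < 3; i++) {
        int prio = NICE_TO_PRIO(nices[i]);
        printf("nice %3d -> static_prio %d, user_prio %d (of %d)\n",
               nices[i], prio, USER_PRIO(prio), MAX_USER_PRIO);
    }
    return 0;
}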
The O(1) scheduler consists mainly of the following:
(1) Each CPU's rq contains two arrays of 140 list heads each: rq->active and rq->expired.
Tasks are queued on the list matching their priority. A task whose timeslice is not yet used up sits in rq->active; once its timeslice runs out it moves to rq->expired; when every task in rq->active has exhausted its timeslice and the active array is empty, rq->active and rq->expired are swapped.
When schedule() picks the next task, it first consults array->bitmap to find the highest priority level that still has runnable tasks, then indexes into the corresponding priority list. On IA processors the bitmap search compiles down to a single instruction such as bsfl, which is why the lookup is O(1).
asmlinkage void __sched schedule(void)
{
    ...
    idx = sched_find_first_bit(array->bitmap);
    queue = array->queue + idx;
    next = list_entry(queue->next, task_t, run_list);
    ...
}
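To see why this lookup is constant-time, here is a self-contained user-space sketch (hypothetical code, not the kernel's) of a 140-level priority bitmap: finding the highest-priority non-empty list costs a fixed number of word tests plus one ffs(), no matter how many tasks are queued:

#include <stdio.h>
#include <strings.h>    /* ffs() */

#define MAX_PRIO 140
#define BITMAP_WORDS ((MAX_PRIO + 31) / 32)

static unsigned int bitmap[BITMAP_WORDS];

/* Mark priority level 'prio' as having at least one runnable task. */
static void set_prio(int prio)
{
    bitmap[prio / 32] |= 1u << (prio % 32);
}

/* Find the lowest-numbered (= highest-priority) non-empty level.
 * At most BITMAP_WORDS word tests plus one ffs() -- constant time,
 * independent of how many tasks are queued. */
static int find_first_prio(void)
{
    for (int w = 0; w < BITMAP_WORDS; w++)
        if (bitmap[w])
            return w * 32 + ffs(bitmap[w]) - 1;
    return MAX_PRIO;    /* all levels empty */
}

int main(void)
{
    set_prio(120);      /* a nice-0 normal task */
    set_prio(105);      /* an RT task at priority 105 */
    printf("next prio level: %d\n", find_first_prio()); /* prints 105 */
    return 0;
}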
(2) A task has both a static priority (p->static_prio) and a dynamic priority (p->prio).
The static priority (p->static_prio) determines the size of the task's timeslice:
/*
 * task_timeslice() scales user-nice values [ -20 ... 0 ... 19 ]
 * to time slice values: [800ms ... 100ms ... 5ms]
 *
 * The higher a thread's priority, the bigger timeslices
 * it gets during one round of execution. But even the lowest
 * priority thread gets MIN_TIMESLICE worth of execution time.
 */
/* Per this formula, if nice(0) gets a 100 ms slice, then nice(-20) gets 800 ms and nice(19) gets 5 ms. */
#define SCALE_PRIO(x, prio) \
    max(x * (MAX_PRIO - prio) / (MAX_USER_PRIO/2), MIN_TIMESLICE)

static unsigned int task_timeslice(task_t *p)
{
    if (p->static_prio < NICE_TO_PRIO(0))
        return SCALE_PRIO(DEF_TIMESLICE * 4, p->static_prio);
    else
        return SCALE_PRIO(DEF_TIMESLICE, p->static_prio);
}

#define MIN_TIMESLICE       max(5 * HZ / 1000, 1)
#define DEF_TIMESLICE       (100 * HZ / 1000)
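Plugging the constants in verifies the comment: with HZ = 1000, DEF_TIMESLICE = 100 ms and MAX_USER_PRIO/2 = 20:

nice(-20), static_prio 100: 400 ms × (140 − 100) / 20 = 800 ms
nice(0),   static_prio 120: 100 ms × (140 − 120) / 20 = 100 ms
nice(19),  static_prio 139: 100 ms × (140 − 139) / 20 = 5 ms (exactly MIN_TIMESLICE)

(The 400 ms base is DEF_TIMESLICE × 4, used for priorities above nice 0.)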
The dynamic priority determines the task's index into the rq->active / rq->expired lists:

static void enqueue_task(struct task_struct *p, prio_array_t *array)
{
    sched_info_queued(p);
    list_add_tail(&p->run_list, array->queue + p->prio);
    __set_bit(p->prio, array->bitmap);
    array->nr_active++;
    p->array = array;
}
The conversion from static to dynamic priority is: dynamic priority = max(100, min(static priority − bonus + 5, 139))
/*
 * effective_prio - return the priority that is based on the static
 * priority but is modified by bonuses/penalties.
 *
 * We scale the actual sleep average [0 .... MAX_SLEEP_AVG]
 * into the -5 ... 0 ... +5 bonus/penalty range.
 *
 * We use 25% of the full 0...39 priority range so that:
 *
 * 1) nice +19 interactive tasks do not preempt nice 0 CPU hogs.
 * 2) nice -20 CPU hogs do not get preempted by nice 0 tasks.
 *
 * Both properties are important to certain workloads.
 */
static int effective_prio(task_t *p)
{
    int bonus, prio;

    if (rt_task(p))
        return p->prio;

    bonus = CURRENT_BONUS(p) - MAX_BONUS / 2;   /* MAX_BONUS = 10 */

    prio = p->static_prio - bonus;
    if (prio < MAX_RT_PRIO)
        prio = MAX_RT_PRIO;
    if (prio > MAX_PRIO-1)
        prio = MAX_PRIO-1;
    return prio;
}
As the code shows, the dynamic priority starts from the static priority and then applies a reward or penalty (the bonus). The bonus is not random: it is derived from the task's past average sleep time. The average sleep time (sleep_avg, a field of task_struct) accumulates the time the task has spent sleeping; "average" here does not mean a direct arithmetic mean over time.
(3) The average sleep time decides whether a task counts as interactive (INTERACTIVE).
What does a task gain from being interactive? When its timeslice runs out, it is re-inserted into the active queue instead of the expired queue:
void scheduler_tick(void)
{
    ...
    if (!--p->time_slice) {
        dequeue_task(p, rq->active);
        set_tsk_need_resched(p);
        p->prio = effective_prio(p);
        p->time_slice = task_timeslice(p);
        p->first_time_slice = 0;
        if (!rq->expired_timestamp)
            rq->expired_timestamp = jiffies;
        if (!TASK_INTERACTIVE(p) || EXPIRED_STARVING(rq)) {
            enqueue_task(p, rq->expired);
            if (p->static_prio < rq->best_expired_prio)
                rq->best_expired_prio = p->static_prio;
        } else
            enqueue_task(p, rq->active);
    }
    ...
}
The test for whether a task is interactive works out to: dynamic priority ≤ 3 × static priority / 4 + 28

#define TASK_INTERACTIVE(p)    ((p)->prio <= (p)->static_prio - DELTA(p))
I have not studied the sleep-average bookkeeping and the interactivity heuristics in detail; the following description covers the idea:
The average sleep time (sleep_avg, in task_struct) is the accumulated time a task has spent in the sleeping state; the "average" is not a direct mean over time. It grows while the task sleeps and shrinks while it runs, so it records the balance between sleeping and running, and is the key datum for judging how interactive a task is. A large sleep_avg suggests a highly interactive task; a small one suggests a task that runs continuously. The value also tracks the task's current behaviour and reacts quickly: if a task is briefly very interactive, sleep_avg can spike (though never above MAX_SLEEP_AVG), and if the task then runs continuously, sleep_avg decays again. With that understood, the meaning of bonus is obvious: interactive tasks are rewarded by the scheduler (positive bonus) while CPU hogs are penalized (negative bonus). The bonus is essentially a condensed image of sleep_avg, rescaled into the bonus range.
Although much improved over its predecessors, the O(1) scheduler's heuristic for telling interactive tasks from batch tasks still fails in many situations. Several well-known programs reliably degrade it and make interactive tasks sluggish, e.g. fiftyp.c, thud.c, chew.c, ring-test.c, massive_intr.c. The O(1) scheduler's NUMA support was also incomplete.
2.2. The CFS scheduler
To address the problems of the O(1) algorithm, Linux introduced CFS (Completely Fair Scheduler). It evolved from the staircase scheduler and RSDL (Rotating Staircase Deadline Scheduler), discards the complicated active/expired arrays and the interactivity heuristics, and puts every task, treated equally, into a single red-black tree keyed by execution time, realizing the idea of complete fairness.
The main ideas of CFS:
A normal task's nice value defines a weight, and the weight converts the task's actual runtime into virtual runtime (vruntime); naturally, a high-priority task running for longer and a low-priority task running for less time come out equal in vruntime terms.
A total scheduling period is computed from the number of tasks on rq->cfs_rq, and each task's share of that period, proportional to its weight, is its ideal runtime (ideal_runtime). scheduler_tick() checks whether the task's actual runtime (exec_runtime) has reached its ideal_runtime, and if so the task needs rescheduling (test_tsk_need_resched(curr)). Thanks to the period, every task on the cfs_rq is guaranteed to run within one period.
Tasks on rq->cfs_rq are organized into a red-black tree (a self-balancing binary search tree) keyed by vruntime, so in pick_next_entity the leftmost node of the tree is the task with the least virtual runtime, the best candidate to schedule next.
Each task's vruntime advances as: delta_vruntime = delta_runtime × NICE_0_LOAD / weight
/* The key idea of this table: each weight is about 1.25x the weight one nice level below it. */
/*
 * Nice levels are multiplicative, with a gentle 10% change for every
 * nice level changed. I.e. when a CPU-bound task goes from nice 0 to
 * nice 1, it will get ~10% less CPU time than another CPU-bound task
 * that remained on nice 0.
 *
 * The "10% effect" is relative and cumulative: from _any_ nice level,
 * if you go up 1 level, it's -10% CPU usage, if you go down 1 level
 * it's +10% CPU usage. (to achieve that we use a multiplier of 1.25.
 * If a task goes up by ~10% and another task goes down by ~10% then
 * the relative distance between them is ~25%.)
 */
static const int prio_to_weight[40] = {
 /* -20 */     88761,     71755,     56483,     46273,     36291,
 /* -15 */     29154,     23254,     18705,     14949,     11916,
 /* -10 */      9548,      7620,      6100,      4904,      3906,
 /*  -5 */      3121,      2501,      1991,      1586,      1277,
 /*   0 */      1024,       820,       655,       526,       423,
 /*   5 */       335,       272,       215,       172,       137,
 /*  10 */       110,        87,        70,        56,        45,
 /*  15 */        36,        29,        23,        18,        15,
};

nice(0) corresponds to the weight NICE_0_LOAD (1024); nice(-1) to roughly NICE_0_LOAD × 1.25; nice(1) to roughly NICE_0_LOAD / 1.25.
NICE_0_LOAD (1024) is a magic number throughout the scheduler's arithmetic: it is the baseline "1". Because the kernel cannot represent fractions, 1 is scaled up to 1024.
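The following user-space sketch (illustrative only; the kernel avoids this wide division with the fixed-point helper __calc_delta()) shows how the weight scales real runtime into vruntime, using weights from the table above:

#include <stdio.h>

#define NICE_0_LOAD 1024

/* Conceptually, vruntime advances by delta_exec * NICE_0_LOAD / weight. */
static unsigned long long vruntime_delta(unsigned long long delta_exec_ns,
                                         unsigned long weight)
{
    return delta_exec_ns * NICE_0_LOAD / weight;
}

int main(void)
{
    /* 6 ms of real execution for nice 0, nice 5 and nice -5 tasks */
    unsigned long long d = 6000000ULL;
    printf("nice  0 (weight 1024): vruntime += %llu ns\n", vruntime_delta(d, 1024));
    printf("nice  5 (weight  335): vruntime += %llu ns\n", vruntime_delta(d, 335));
    printf("nice -5 (weight 3121): vruntime += %llu ns\n", vruntime_delta(d, 3121));
    return 0;
}

A nice-0 task's vruntime tracks wall-clock runtime exactly; a lighter (nice 5) task's vruntime advances about 3x faster, so it gets scheduled out sooner, while a heavier (nice -5) task's vruntime advances about 3x slower.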
scheduler_tick() -> task_tick_fair() -> update_curr():

static void update_curr(struct cfs_rq *cfs_rq)
{
    ...
    curr->sum_exec_runtime += delta_exec;
    schedstat_add(cfs_rq, exec_clock, delta_exec);

    curr->vruntime += calc_delta_fair(delta_exec, curr);
    update_min_vruntime(cfs_rq);
    ...
}

static inline u64 calc_delta_fair(u64 delta, struct sched_entity *se)
{
    if (unlikely(se->load.weight != NICE_0_LOAD))
        delta = __calc_delta(delta, NICE_0_LOAD, &se->load);

    return delta;
}
scheduler_tick() computes the period and ideal_runtime from the number of sched_entities on the cfs_rq, and checks whether the current task's time is used up and it needs rescheduling:
scheduler_tick() -> task_tick_fair() -> entity_tick() -> check_preempt_tick():
static void
check_preempt_tick(struct cfs_rq *cfs_rq, struct sched_entity *curr)
{
    unsigned long ideal_runtime, delta_exec;
    struct sched_entity *se;
    s64 delta;

    ideal_runtime = sched_slice(cfs_rq, curr);
    delta_exec = curr->sum_exec_runtime - curr->prev_sum_exec_runtime;
    if (delta_exec > ideal_runtime) {
        resched_curr(rq_of(cfs_rq));
        clear_buddies(cfs_rq, curr);
        return;
    }

    if (delta_exec < sysctl_sched_min_granularity)
        return;

    se = __pick_first_entity(cfs_rq);
    delta = curr->vruntime - se->vruntime;
    if (delta < 0)
        return;
    if (delta > ideal_runtime)
        resched_curr(rq_of(cfs_rq));
}
static u64 sched_slice(struct cfs_rq *cfs_rq, struct sched_entity *se)
{
    u64 slice = __sched_period(cfs_rq->nr_running + !se->on_rq);

    for_each_sched_entity(se) {
        struct load_weight *load;
        struct load_weight lw;

        cfs_rq = cfs_rq_of(se);
        load = &cfs_rq->load;

        if (unlikely(!se->on_rq)) {
            lw = cfs_rq->load;
            update_load_add(&lw, se->load.weight);
            load = &lw;
        }
        slice = __calc_delta(slice, se->load.weight, load);
    }
    return slice;
}

static u64 __sched_period(unsigned long nr_running)
{
    if (unlikely(nr_running > sched_nr_latency))
        return nr_running * sysctl_sched_min_granularity;
    else
        return sysctl_sched_latency;
}
unsigned int sysctl_sched_min_granularity = 750000ULL;
unsigned int normalized_sysctl_sched_min_granularity = 750000ULL;
static unsigned int sched_nr_latency = 8;
unsigned int sysctl_sched_latency = 6000000ULL;
unsigned int normalized_sysctl_sched_latency = 6000000ULL;
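With these defaults the arithmetic works out as follows: for nr_running ≤ sched_nr_latency (8), the period is sysctl_sched_latency = 6 ms, and N equally weighted nice-0 tasks each get ideal_runtime = 6/N ms; beyond 8 tasks the period stretches to nr_running × 0.75 ms (e.g. 16 tasks → 12 ms), so an equal-weight slice never drops below sysctl_sched_min_granularity = 0.75 ms.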
2.2.3. The red-black tree (Red-Black Tree)
A red-black tree is a self-balancing binary search tree with these properties:
1. Balance. No root-to-leaf path is more than twice as long as any other, so tree operations, including walking to the leftmost node in pick_next_task(), cost O(log n). That is worse than the O(1) algorithm's lookup, but the path length is bounded (roughly 2·log2(n)), so the cost stays well controlled.
2. Ordering. Every node's left descendants are smaller than its right descendants, so the leftmost node holds the minimum key.
The tasks form a red-black tree keyed by vruntime:
enqueue_task_fair() -> enqueue_entity() -> __enqueue_entity():
static void __enqueue_entity(struct cfs_rq *cfs_rq, struct sched_entity *se)
{
    struct rb_node **link = &cfs_rq->tasks_timeline.rb_node;
    struct rb_node *parent = NULL;
    struct sched_entity *entry;
    int leftmost = 1;

    while (*link) {
        parent = *link;
        entry = rb_entry(parent, struct sched_entity, run_node);
        if (entity_before(se, entry)) {
            link = &parent->rb_left;
        } else {
            link = &parent->rb_right;
            leftmost = 0;
        }
    }

    if (leftmost)
        cfs_rq->rb_leftmost = &se->run_node;

    rb_link_node(&se->run_node, parent, link);
    rb_insert_color(&se->run_node, &cfs_rq->tasks_timeline);
}
2.2.4. sched_entity and task_group
Because newer kernels introduced the notion of a task_group, scheduling no longer works directly on task_struct but on sched_entity. A sched_entity can represent either a task or one per-CPU entity of a task_group (task_group->se[cpu]). (The original article includes a diagram of these relationships; a text sketch follows the list below.)
The hierarchy works as follows:
1. Each CPU has one rq;
2. Each rq has one cfs_rq;
3. A cfs_rq organizes the sched_entities of one level in a red-black tree;
4. If a sched_entity corresponds to a task_struct, the sched_entity and the task map one-to-one;
5. If a sched_entity corresponds to a task_group, it is one of that group's several sched_entities: the group keeps an array se[cpu] with one sched_entity per CPU. Such a sched_entity owns its own cfs_rq (se->my_q), and that cfs_rq again organizes the next level of sched_entities in a red-black tree; the relationships of points 3-5 recurse downward.
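A minimal text sketch of the nesting (two levels shown; the field names are the real kernel ones, the shape is illustrative):

rq (one per cpu)
└── rq->cfs                       top-level cfs_rq: red-black tree of sched_entities
    ├── se A                      entity_is_task(): maps 1:1 to a task_struct
    ├── se B                      another task
    └── se G = tg->se[cpu]        group entity of task_group tg
        └── G->my_q               the group's own cfs_rq: another red-black tree
            ├── se C              a task inside tg
            └── se H = tg2->se[cpu]   a nested group; the recursion continues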
2.2.5. scheduler_tick()
The core of the algorithm lives in scheduler_tick(), so let's walk through that code in detail.
void scheduler_tick(void)
{
    int cpu = smp_processor_id();
    struct rq *rq = cpu_rq(cpu);
    struct task_struct *curr = rq->curr;

    sched_clock_tick();

#ifdef CONFIG_MTK_SCHED_MONITOR
    mt_trace_rqlock_start(&rq->lock);
#endif
    raw_spin_lock(&rq->lock);
#ifdef CONFIG_MTK_SCHED_MONITOR
    mt_trace_rqlock_end(&rq->lock);
#endif
    update_rq_clock(rq);
    curr->sched_class->task_tick(rq, curr, 0);
    update_cpu_load_active(rq);
    calc_global_load_tick(rq);
    sched_freq_tick(cpu);
    raw_spin_unlock(&rq->lock);

    perf_event_task_tick();
#ifdef CONFIG_MTK_SCHED_MONITOR
    mt_save_irq_counts(SCHED_TICK);
#endif
#ifdef CONFIG_SMP
    rq->idle_balance = idle_cpu(cpu);
    trigger_load_balance(rq);
#endif
    rq_last_tick_reset(rq);
}
static void task_tick_fair(struct rq *rq, struct task_struct *curr, int queued)
{
    struct cfs_rq *cfs_rq;
    struct sched_entity *se = &curr->se;

    for_each_sched_entity(se) {
        cfs_rq = cfs_rq_of(se);
        entity_tick(cfs_rq, se, queued);
    }

    if (static_branch_unlikely(&sched_numa_balancing))
        task_tick_numa(rq, curr);

    if (!rq->rd->overutilized && cpu_overutilized(task_cpu(curr)))
        rq->rd->overutilized = true;
}
static void
entity_tick(struct cfs_rq *cfs_rq, struct sched_entity *curr, int queued)
{
    update_curr(cfs_rq);
    update_load_avg(curr, 1);
    update_cfs_shares(cfs_rq);

#ifdef CONFIG_SCHED_HRTICK
    if (queued) {
        resched_curr(rq_of(cfs_rq));
        return;
    }
    if (!sched_feat(DOUBLE_TICK) &&
        hrtimer_active(&rq_of(cfs_rq)->hrtick_timer))
        return;
#endif

    if (cfs_rq->nr_running > 1)
        check_preempt_tick(cfs_rq, curr);
}
static void update_curr(struct cfs_rq *cfs_rq)
{
    struct sched_entity *curr = cfs_rq->curr;
    u64 now = rq_clock_task(rq_of(cfs_rq));
    u64 delta_exec;

    if (unlikely(!curr))
        return;

    delta_exec = now - curr->exec_start;
    if (unlikely((s64)delta_exec <= 0))
        return;

    curr->exec_start = now;

    schedstat_set(curr->statistics.exec_max,
                  max(delta_exec, curr->statistics.exec_max));

    curr->sum_exec_runtime += delta_exec;
    schedstat_add(cfs_rq, exec_clock, delta_exec);

    curr->vruntime += calc_delta_fair(delta_exec, curr);
    update_min_vruntime(cfs_rq);

    if (entity_is_task(curr)) {
        struct task_struct *curtask = task_of(curr);

        trace_sched_stat_runtime(curtask, delta_exec, curr->vruntime);
        cpuacct_charge(curtask, delta_exec);
        account_group_exec_runtime(curtask, delta_exec);
    }

    account_cfs_rq_runtime(cfs_rq, delta_exec);
}
Beyond the regular scheduler_tick() accounting, CFS and vruntime need special handling at a few special moments. Some questions tease these details out:
1. What vruntime does a new task start with?
If a new task started with vruntime 0, far below the older tasks' values, it would keep a preemption advantage for a long time while the old tasks starved, which is clearly unfair.
What CFS does: take the maximum of the parent's vruntime (curr->vruntime) and (cfs_rq->min_vruntime + the value the se would have after running one round), and assign that to the newly created task, minimizing the new task's impact on existing tasks' scheduling.
_do_fork() -> copy_process() -> sched_fork() -> task_fork_fair():
static void task_fork_fair(struct task_struct *p)
{
    ...
    se->vruntime = curr->vruntime;
    place_entity(cfs_rq, se, 1);

    if (sysctl_sched_child_runs_first && curr && entity_before(curr, se)) {
        swap(curr->vruntime, se->vruntime);
        resched_curr(rq);
    }

    se->vruntime -= cfs_rq->min_vruntime;
    raw_spin_unlock_irqrestore(&rq->lock, flags);
}
static void
place_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int initial)
{
    u64 vruntime = cfs_rq->min_vruntime;

    if (initial && sched_feat(START_DEBIT))
        vruntime += sched_vslice(cfs_rq, se);

    if (!initial) {
        unsigned long thresh = sysctl_sched_latency;

        if (sched_feat(GENTLE_FAIR_SLEEPERS))
            thresh >>= 1;

        vruntime -= thresh;
    }

    se->vruntime = max_vruntime(se->vruntime, vruntime);
}
static void
enqueue_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags)
{
    ...
    if (!(flags & ENQUEUE_WAKEUP) || (flags & ENQUEUE_WAKING))
        se->vruntime += cfs_rq->min_vruntime;
    ...
}
2. Does a sleeping task's vruntime stay frozen forever?
If a sleeper's vruntime stayed unchanged while the running tasks' vruntime kept advancing, then when the sleeper finally woke, its vruntime would be far below everyone else's, giving it a long-lasting preemption advantage and starving the others: unfairness of another kind.
What CFS does: recompute the vruntime when the sleeper is woken, starting from min_vruntime and granting a bounded credit, never too much.
static void
enqueue_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags)
{
    ...
    if (flags & ENQUEUE_WAKEUP) {
        /* (1) recompute the task's vruntime on wakeup */
        place_entity(cfs_rq, se, 0);
        enqueue_sleeper(cfs_rq, se);
    }
    ...
}

static void
place_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int initial)
{
    /* (1.1) start from the cfs_rq's current minimum, min_vruntime */
    u64 vruntime = cfs_rq->min_vruntime;

    /*
     * The 'current' period is already promised to the current tasks,
     * however the extra weight of the new task will slow them down a
     * little, place the new task so that it fits in the slot that
     * stays open at the end.
     */
    if (initial && sched_feat(START_DEBIT))
        vruntime += sched_vslice(cfs_rq, se);

    /* sleeps up to a single latency don't count. */
    /* (1.2) grant a credit below min_vruntime; the default credit is
     * sysctl_sched_latency (6 ms), halved when GENTLE_FAIR_SLEEPERS is set
     */
    if (!initial) {
        unsigned long thresh = sysctl_sched_latency;

        /*
         * Halve their sleep time's effect, to allow
         * for a gentler effect of sleepers:
         */
        if (sched_feat(GENTLE_FAIR_SLEEPERS))
            thresh >>= 1;

        vruntime -= thresh;
    }

    /* ensure we never gain time by being placed backwards. */
    se->vruntime = max_vruntime(se->vruntime, vruntime);
}
3. Does a woken sleeper preempt the CPU immediately?
By default a wakeup immediately checks whether preemption is possible. Because the woken task's vruntime was placed with a credit below the cfs_rq's min_vruntime, it will usually preempt the currently running task.
CFS can suppress this by disabling the WAKEUP_PREEMPTION feature, though that also gives up wakeup preemption altogether.
try_to_wake_up() -> ttwu_queue() -> ttwu_do_activate() -> ttwu_do_wakeup() -> check_preempt_curr() -> check_preempt_wakeup()

static void check_preempt_wakeup(struct rq *rq, struct task_struct *p, int wake_flags)
{
    ...
    /*
     * Batch and idle tasks do not preempt non-idle tasks (their preemption
     * is driven by the tick):
     */
    /* (1) if the WAKEUP_PREEMPTION feature is not set, skip wakeup preemption */
    if (unlikely(p->policy != SCHED_NORMAL) || !sched_feat(WAKEUP_PREEMPTION))
        return;
    ...
preempt:
    resched_curr(rq);
    ...
}
4. Does vruntime change when a task migrates from one CPU to another?
Different CPUs carry different loads, so the vruntime levels of the se's on different cfs_rq's differ. Keeping a task's vruntime unchanged across a migration would also be quite unfair.
CFS uses a clever trick: subtract the old cfs_rq's min_vruntime when leaving it, and add the new cfs_rq's min_vruntime when joining it.
static void
dequeue_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags)
{
    ...
    if (!(flags & DEQUEUE_SLEEP))
        se->vruntime -= cfs_rq->min_vruntime;
    ...
}

static void
enqueue_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags)
{
    ...
    if (!(flags & ENQUEUE_WAKEUP) || (flags & ENQUEUE_WAKING))
        se->vruntime += cfs_rq->min_vruntime;
    ...
}
2.2.7. CFS bandwidth
1. CFS bandwidth is configured per task_group; a task_group's bandwidth is controlled through a struct cfs_bandwidth data structure (cfs_b):
struct cfs_bandwidth {
#ifdef CONFIG_CFS_BANDWIDTH
    raw_spinlock_t lock;
    ktime_t period;
    u64 quota, runtime;
    s64 hierarchical_quota;
    u64 runtime_expires;

    int idle, period_active;
    struct hrtimer period_timer;
    struct hrtimer slack_timer;
    struct list_head throttled_cfs_rq;

    /* statistics */
    int nr_periods, nr_throttled;
    u64 throttled_time;
#endif
};
The key fields: cfs_b->period is the monitoring period, cfs_b->quota is the group's runtime quota within one period, and cfs_b->runtime is the group's remaining runtime. cfs_b->runtime is refilled to cfs_b->quota at the start of each monitoring period and is consumed as the group runs; once cfs_b->runtime is exhausted, the group has exceeded its bandwidth and throttling kicks in.
CFS bandwidth exists for CGROUP_SCHED, so cfs_b->quota defaults to RUNTIME_INF (unlimited); after enabling CGROUP_SCHED you must configure these parameters yourself.
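Concretely, with the cgroup v1 cpu controller these knobs surface as the standard interface files cpu.cfs_period_us and cpu.cfs_quota_us (the mount point and group name below are illustrative). For example, limiting a group to half a CPU:

# assuming the cpu controller is mounted at /sys/fs/cgroup/cpu
mkdir /sys/fs/cgroup/cpu/mygrp
echo 100000 > /sys/fs/cgroup/cpu/mygrp/cpu.cfs_period_us    # cfs_b->period = 100 ms
echo 50000  > /sys/fs/cgroup/cpu/mygrp/cpu.cfs_quota_us     # cfs_b->quota  = 50 ms per period
echo $$     > /sys/fs/cgroup/cpu/mygrp/tasks                # move the current shell into the group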
2. Because a task_group creates one cfs_rq per CPU, cfs_b->quota is shared among the tasks on all of those per-CPU cfs_rq's: each per-CPU cfs_rq must draw runtime from tg->cfs_bandwidth->runtime as it executes;
scheduler_tick() -> task_tick_fair() -> entity_tick() -> update_curr() -> account_cfs_rq_runtime()
static __always_inline
void account_cfs_rq_runtime(struct cfs_rq *cfs_rq, u64 delta_exec)
{
    if (!cfs_bandwidth_used() || !cfs_rq->runtime_enabled)
        return;

    __account_cfs_rq_runtime(cfs_rq, delta_exec);
}

static void __account_cfs_rq_runtime(struct cfs_rq *cfs_rq, u64 delta_exec)
{
    cfs_rq->runtime_remaining -= delta_exec;
    expire_cfs_rq_runtime(cfs_rq);

    if (likely(cfs_rq->runtime_remaining > 0))
        return;

    if (!assign_cfs_rq_runtime(cfs_rq) && likely(cfs_rq->curr))
        resched_curr(rq_of(cfs_rq));
}
static int assign_cfs_rq_runtime(struct cfs_rq *cfs_rq)
{
    struct task_group *tg = cfs_rq->tg;
    struct cfs_bandwidth *cfs_b = tg_cfs_bandwidth(tg);
    u64 amount = 0, min_amount, expires;

    min_amount = sched_cfs_bandwidth_slice() - cfs_rq->runtime_remaining;

    raw_spin_lock(&cfs_b->lock);
    if (cfs_b->quota == RUNTIME_INF)
        amount = min_amount;
    else {
        start_cfs_bandwidth(cfs_b);

        if (cfs_b->runtime > 0) {
            amount = min(cfs_b->runtime, min_amount);
            cfs_b->runtime -= amount;
            cfs_b->idle = 0;
        }
    }
    expires = cfs_b->runtime_expires;
    raw_spin_unlock(&cfs_b->lock);

    cfs_rq->runtime_remaining += amount;
    if ((s64)(expires - cfs_rq->runtime_expires) > 0)
        cfs_rq->runtime_expires = expires;

    return cfs_rq->runtime_remaining > 0;
}
3. At enqueue_task_fair(), put_prev_task_fair() and pick_next_task_fair(), the kernel checks whether the cfs_rq has hit its throttle; if it has, throttle_cfs_rq() dequeues the cfs_rq and stops it from running;
enqueue_task_fair() -> enqueue_entity() -> check_enqueue_throttle() -> throttle_cfs_rq()
put_prev_task_fair() -> put_prev_entity() -> check_cfs_rq_runtime() -> throttle_cfs_rq()
pick_next_task_fair() -> check_cfs_rq_runtime() -> throttle_cfs_rq()
static void check_enqueue_throttle(struct cfs_rq *cfs_rq)
{
    if (!cfs_bandwidth_used())
        return;

    /* an active group must be handled by the update_curr()->put() path */
    if (!cfs_rq->runtime_enabled || cfs_rq->curr)
        return;

    /* (1.1) already throttled: nothing to do */
    /* ensure the group is not already throttled */
    if (cfs_rq_throttled(cfs_rq))
        return;

    /* update runtime allocation */
    /* (1.2) refresh the cfs_rq's runtime accounting */
    account_cfs_rq_runtime(cfs_rq, 0);
    /* (1.3) if cfs_rq->runtime_remaining <= 0, start throttling */
    if (cfs_rq->runtime_remaining <= 0)
        throttle_cfs_rq(cfs_rq);
}
/* conditionally throttle active cfs_rq's from put_prev_entity() */
static bool check_cfs_rq_runtime(struct cfs_rq *cfs_rq)
{
    if (!cfs_bandwidth_used())
        return false;

    /* (2.1) if cfs_rq->runtime_remaining still has runtime, nothing to do */
    if (likely(!cfs_rq->runtime_enabled || cfs_rq->runtime_remaining > 0))
        return false;

    /*
     * it's possible for a throttled entity to be forced into a running
     * state (e.g. set_curr_task), in this case we're finished.
     */
    /* (2.2) already throttled: we're done */
    if (cfs_rq_throttled(cfs_rq))
        return true;

    /* (2.3) runtime exhausted: perform the throttle */
    throttle_cfs_rq(cfs_rq);
    return true;
}
static void throttle_cfs_rq(struct cfs_rq *cfs_rq)
{
    struct rq *rq = rq_of(cfs_rq);
    struct cfs_bandwidth *cfs_b = tg_cfs_bandwidth(cfs_rq->tg);
    struct sched_entity *se;
    long task_delta, dequeue = 1;
    bool empty;

    se = cfs_rq->tg->se[cpu_of(rq_of(cfs_rq))];

    /* freeze hierarchy runnable averages while throttled */
    rcu_read_lock();
    walk_tg_tree_from(cfs_rq->tg, tg_throttle_down, tg_nop, (void *)rq);
    rcu_read_unlock();

    task_delta = cfs_rq->h_nr_running;
    for_each_sched_entity(se) {
        struct cfs_rq *qcfs_rq = cfs_rq_of(se);
        /* throttled entity or throttle-on-deactivate */
        if (!se->on_rq)
            break;

        /* (3.1) throttle step 1: dequeue the cfs_rq so it stops running */
        if (dequeue)
            dequeue_entity(qcfs_rq, se, DEQUEUE_SLEEP);
        qcfs_rq->h_nr_running -= task_delta;

        if (qcfs_rq->load.weight)
            dequeue = 0;
    }

    if (!se)
        sub_nr_running(rq, task_delta);

    /* (3.2) throttle step 2: mark cfs_rq->throttled */
    cfs_rq->throttled = 1;
    cfs_rq->throttled_clock = rq_clock(rq);
    raw_spin_lock(&cfs_b->lock);
    empty = list_empty(&cfs_b->throttled_cfs_rq);

    /*
     * Add to the _head_ of the list, so that an already-started
     * distribute_cfs_runtime will not see us
     */
    list_add_rcu(&cfs_rq->throttled_list, &cfs_b->throttled_cfs_rq);

    /*
     * If we're the first throttled task, make sure the bandwidth
     * timer is running.
     */
    if (empty)
        start_cfs_bandwidth(cfs_b);

    raw_spin_unlock(&cfs_b->lock);
}
4. For each task_group's cfs_b, the kernel runs a periodic timer, cfs_b->period_timer, with period cfs_b->period. Its main job, once a period expires, is to check whether any cfs_rq was throttled, unthrottle it, and start a new round of monitoring;
sched_cfs_period_timer() -> do_sched_cfs_period_timer()
static int do_sched_cfs_period_timer(struct cfs_bandwidth *cfs_b, int overrun)
{
    u64 runtime, runtime_expires;
    int throttled;

    /* no need to continue the timer with no bandwidth constraint */
    if (cfs_b->quota == RUNTIME_INF)
        goto out_deactivate;

    throttled = !list_empty(&cfs_b->throttled_cfs_rq);
    cfs_b->nr_periods += overrun;

    /*
     * idle depends on !throttled (for the case of a large deficit), and if
     * we're going inactive then everything else can be deferred
     */
    if (cfs_b->idle && !throttled)
        goto out_deactivate;

    /* (1) at the start of a new period, refill cfs_b->runtime from cfs_b->quota */
    __refill_cfs_bandwidth_runtime(cfs_b);

    if (!throttled) {
        /* mark as potentially idle for the upcoming period */
        cfs_b->idle = 1;
        return 0;
    }

    /* account preceding periods in which throttling occurred */
    cfs_b->nr_throttled += overrun;

    runtime_expires = cfs_b->runtime_expires;

    /*
     * This check is repeated as we are holding onto the new bandwidth while
     * we unthrottle. This can potentially race with an unthrottled group
     * trying to acquire new bandwidth from the global pool. This can result
     * in us over-using our runtime if it is all used during this loop, but
     * only by limited amounts in that extreme case.
     */
    /* (2) unthrottle every cfs_rq held on cfs_b->throttled_cfs_rq */
    while (throttled && cfs_b->runtime > 0) {
        runtime = cfs_b->runtime;
        raw_spin_unlock(&cfs_b->lock);
        /* we can't nest cfs_b->lock while distributing bandwidth */
        runtime = distribute_cfs_runtime(cfs_b, runtime,
                                         runtime_expires);
        raw_spin_lock(&cfs_b->lock);

        throttled = !list_empty(&cfs_b->throttled_cfs_rq);

        cfs_b->runtime -= min(runtime, cfs_b->runtime);
    }

    /*
     * While we are ensured activity in the period following an
     * unthrottle, this also covers the case in which the new bandwidth is
     * insufficient to cover the existing bandwidth deficit.  (Forcing the
     * timer to remain active while there are any throttled entities.)
     */
    cfs_b->idle = 0;

    return 0;

out_deactivate:
    return 1;
}
static u64 distribute_cfs_runtime(struct cfs_bandwidth *cfs_b,
        u64 remaining, u64 expires)
{
    struct cfs_rq *cfs_rq;
    u64 runtime;
    u64 starting_runtime = remaining;

    rcu_read_lock();
    list_for_each_entry_rcu(cfs_rq, &cfs_b->throttled_cfs_rq,
                            throttled_list) {
        struct rq *rq = rq_of(cfs_rq);

        raw_spin_lock(&rq->lock);
        if (!cfs_rq_throttled(cfs_rq))
            goto next;

        runtime = -cfs_rq->runtime_remaining + 1;
        if (runtime > remaining)
            runtime = remaining;
        remaining -= runtime;

        cfs_rq->runtime_remaining += runtime;
        cfs_rq->runtime_expires = expires;

        /* (2.1) lift the throttle */
        /* we check whether we're throttled above */
        if (cfs_rq->runtime_remaining > 0)
            unthrottle_cfs_rq(cfs_rq);

next:
        raw_spin_unlock(&rq->lock);

        if (!remaining)
            break;
    }
    rcu_read_unlock();

    return starting_runtime - remaining;
}
void unthrottle_cfs_rq(struct cfs_rq *cfs_rq)
{
    struct rq *rq = rq_of(cfs_rq);
    struct cfs_bandwidth *cfs_b = tg_cfs_bandwidth(cfs_rq->tg);
    struct sched_entity *se;
    int enqueue = 1;
    long task_delta;

    se = cfs_rq->tg->se[cpu_of(rq)];

    cfs_rq->throttled = 0;

    update_rq_clock(rq);

    raw_spin_lock(&cfs_b->lock);
    cfs_b->throttled_time += rq_clock(rq) - cfs_rq->throttled_clock;
    list_del_rcu(&cfs_rq->throttled_list);
    raw_spin_unlock(&cfs_b->lock);

    /* update hierarchical throttle state */
    walk_tg_tree_from(cfs_rq->tg, tg_nop, tg_unthrottle_up, (void *)rq);

    if (!cfs_rq->load.weight)
        return;

    task_delta = cfs_rq->h_nr_running;
    for_each_sched_entity(se) {
        if (se->on_rq)
            enqueue = 0;

        cfs_rq = cfs_rq_of(se);
        /* (2.1.1) re-enqueue the cfs_rq so it can run again */
        if (enqueue)
            enqueue_entity(cfs_rq, se, ENQUEUE_WAKEUP);
        cfs_rq->h_nr_running += task_delta;

        if (cfs_rq_throttled(cfs_rq))
            break;
    }

    if (!se)
        add_nr_running(rq, task_delta);

    /* determine whether we need to wake up potentially idle cpu */
    if (rq->curr == rq->idle && rq->cfs.nr_running)
        resched_curr(rq);
}
2.2.8. sched sysctl parameters
The kernel registers many sysctl parameters for tuning; they are visible under /proc/sys/kernel/:
# ls /proc/sys/kernel/sched_*
sched_cfs_boost
sched_child_runs_first
sched_latency_ns
sched_migration_cost_ns
sched_min_granularity_ns
sched_nr_migrate
sched_rr_timeslice_ms
sched_rt_period_us
sched_rt_runtime_us
sched_shares_window_ns
sched_time_avg_ms
sched_tunable_scaling
sched_wakeup_granularity_ns
The corresponding definitions are in kern_table[]:
static struct ctl_table kern_table[] = {
    {
        .procname     = "sched_child_runs_first",
        .data         = &sysctl_sched_child_runs_first,
        .maxlen       = sizeof(unsigned int),
        .mode         = 0644,
        .proc_handler = proc_dointvec,
    },
#ifdef CONFIG_SCHED_DEBUG
    {
        .procname     = "sched_min_granularity_ns",
        .data         = &sysctl_sched_min_granularity,
        .maxlen       = sizeof(unsigned int),
        .mode         = 0644,
        .proc_handler = sched_proc_update_handler,
        .extra1       = &min_sched_granularity_ns,
        .extra2       = &max_sched_granularity_ns,
    },
    {
        .procname     = "sched_latency_ns",
        .data         = &sysctl_sched_latency,
        .maxlen       = sizeof(unsigned int),
        .mode         = 0644,
        .proc_handler = sched_proc_update_handler,
        .extra1       = &min_sched_granularity_ns,
        .extra2       = &max_sched_granularity_ns,
    },
    {
        .procname     = "sched_wakeup_granularity_ns",
        .data         = &sysctl_sched_wakeup_granularity,
        .maxlen       = sizeof(unsigned int),
        .mode         = 0644,
        .proc_handler = sched_proc_update_handler,
        .extra1       = &min_wakeup_granularity_ns,
        .extra2       = &max_wakeup_granularity_ns,
    },
#ifdef CONFIG_SMP
    {
        .procname     = "sched_tunable_scaling",
        .data         = &sysctl_sched_tunable_scaling,
        .maxlen       = sizeof(enum sched_tunable_scaling),
        .mode         = 0644,
        .proc_handler = sched_proc_update_handler,
        .extra1       = &min_sched_tunable_scaling,
        .extra2       = &max_sched_tunable_scaling,
    },
    {
        .procname     = "sched_migration_cost_ns",
        .data         = &sysctl_sched_migration_cost,
        .maxlen       = sizeof(unsigned int),
        .mode         = 0644,
        .proc_handler = proc_dointvec,
    },
    {
        .procname     = "sched_nr_migrate",
        .data         = &sysctl_sched_nr_migrate,
        .maxlen       = sizeof(unsigned int),
        .mode         = 0644,
        .proc_handler = proc_dointvec,
    },
    {
        .procname     = "sched_time_avg_ms",
        .data         = &sysctl_sched_time_avg,
        .maxlen       = sizeof(unsigned int),
        .mode         = 0644,
        .proc_handler = proc_dointvec,
    },
    {
        .procname     = "sched_shares_window_ns",
        .data         = &sysctl_sched_shares_window,
        .maxlen       = sizeof(unsigned int),
        .mode         = 0644,
        .proc_handler = proc_dointvec,
    },
#endif /* CONFIG_SMP */
#endif /* CONFIG_SCHED_DEBUG */
    {
        .procname     = "sched_rt_period_us",
        .data         = &sysctl_sched_rt_period,
        .maxlen       = sizeof(unsigned int),
        .mode         = 0644,
        .proc_handler = sched_rt_handler,
    },
    {
        .procname     = "sched_rt_runtime_us",
        .data         = &sysctl_sched_rt_runtime,
        .maxlen       = sizeof(int),
        .mode         = 0644,
        .proc_handler = sched_rt_handler,
    },
    {
        .procname     = "sched_rr_timeslice_ms",
        .data         = &sched_rr_timeslice,
        .maxlen       = sizeof(int),
        .mode         = 0644,
        .proc_handler = sched_rr_handler,
    },
#ifdef CONFIG_SCHED_AUTOGROUP
    {
        .procname     = "sched_autogroup_enabled",
        .data         = &sysctl_sched_autogroup_enabled,
        .maxlen       = sizeof(unsigned int),
        .mode         = 0644,
        .proc_handler = proc_dointvec_minmax,
    },
#endif
#ifdef CONFIG_CFS_BANDWIDTH
    {
        .procname     = "sched_cfs_bandwidth_slice_us",
        .data         = &sysctl_sched_cfs_bandwidth_slice,
        .maxlen       = sizeof(unsigned int),
        .mode         = 0644,
        .proc_handler = proc_dointvec_minmax,
    },
#endif
#ifdef CONFIG_SCHED_TUNE
    {
        .procname     = "sched_cfs_boost",
        .data         = &sysctl_sched_cfs_boost,
        .maxlen       = sizeof(sysctl_sched_cfs_boost),
#ifdef CONFIG_CGROUP_SCHEDTUNE
        .mode         = 0444,
#else
        .mode         = 0644,
#endif
        .proc_handler = &sysctl_sched_cfs_boost_handler,
        .extra2       = &one_hundred,
    },
#endif
    ...
};
2.2.9. "/proc/sched_debug"
/proc/sched_debug prints detailed scheduler state; the corresponding code is in kernel/sched/debug.c:
# cat /proc/sched_debug
Sched Debug Version: v0.11, 4.4.22+ #95

sysctl_sched
  .sysctl_sched_latency                    : 10.000000
  .sysctl_sched_min_granularity            : 2.250000
  .sysctl_sched_wakeup_granularity         : 2.000000
  .sysctl_sched_child_runs_first           : ...
  .sysctl_sched_features                   : ...
  .sysctl_sched_tunable_scaling            : 0 (none)

cpu#0: Online
  .nr_running                    : ...    // rq->nr_running: total runnable tasks on the rq (cfs_rq + rt_rq + dl_rq)
  .load                          : ...    // rq->load.weight: total weight of the rq
  .nr_switches / .nr_load_updates / .nr_uninterruptible / .next_balance : ...
  .curr->pid                     : ...    // pid of rq->curr, the currently running task
  .clock                         : ...    // total rq runtime, in seconds
  .clock_task                    : ...    // total task runtime of the rq, in seconds
  .cpu_load[0] ... .cpu_load[4]  : ...    // cpu-level load values, rq->cpu_load[]
  .yld_count / .sched_count / .sched_goidle : ...
  .max_idle_balance_cost         : ...
  .ttwu_count / .ttwu_local      : ...

cfs_rq[0]:/bg_non_interactive             // leaf cfs_rq of the "/bg_non_interactive" group
  .exec_clock                    : ...    // cfs_rq->exec_clock
  .MIN_vruntime / .min_vruntime / .max_vruntime / .spread / .spread0 : ...
  .nr_spread_over                : ...
  .nr_running                    : ...    // cfs_rq->nr_running: runnable tasks on this cfs_rq
  .load                          : ...    // cfs_rq->load.weight
  .load_avg                      : ...    // cfs_rq->avg.load_avg
  .runnable_load_avg             : ...    // cfs_rq->runnable_load_avg
  .util_avg                      : ...    // cfs_rq->avg.util_avg
  .removed_load_avg / .removed_util_avg / .tg_load_avg_contrib / .tg_load_avg : ...
  .se->exec_start                : ...    // print_cfs_group_stats(), se = cfs_rq->tg->se[cpu]
  .se->vruntime / .se->sum_exec_runtime : ...
  .se->statistics.{wait,sleep,block}_start / {sleep,block,exec,slice,wait}_max / wait_sum / wait_count : ...
  .se->load.weight / .se->avg.load_avg / .se->avg.util_avg : ...

cfs_rq[0]:/                               // root cfs_rq, "/" (same fields as above, minus the group se stats)

rt_rq[0]:/bg_non_interactive              // leaf rt_rq of "/bg_non_interactive"
  .rt_nr_running                 : ...
  .rt_throttled                  : ...
  .rt_time                       : 0.000000
  .rt_runtime                    : 700.000000

rt_rq[0]:/                                // root rt_rq, "/"
  .rt_nr_running                 : ...
  .rt_throttled                  : ...
  .rt_time                       : 0.000000
  .rt_runtime                    : 800.000000

dl_rq[0]:
  .dl_nr_running                 : ...

runnable tasks:
            task   PID         tree-key  switches  prio     wait-time             sum-exec        sum-sleep
----------------------------------------------------------------------------------------------------------
// This section does not list only the currently runnable tasks: it walks all tasks and prints every one
// whose last run was on this cpu, so although rq->nr_running above may be 1, dozens of sleeping tasks
// (ksoftirqd/0, kworkers, binder threads, app threads, ...) are printed as well.
// "R" in the first column marks the currently running task, rq->curr.
// "tree-key"  : p->se.vruntime, the task's vruntime
// "wait-time" : p->se.statistics.wait_sum, time spent on the runqueue (runnable + running time)
// "sum-exec"  : p->se.sum_exec_runtime, accumulated execution (running) time
// "sum-sleep" : p->se.statistics.sum_sleep_runtime, accumulated sleep time
  ...                                     [several hundred per-task rows, garbled in extraction, elided]

cpu#1: Online
  ...                                     [the same rq / cfs_rq[1] / rt_rq[1] / dl_rq[1] / runnable-tasks
                                           sections repeat for cpu#1 and every remaining cpu]
2.2.10. "/proc/schedstat" and "/proc/pid/schedstat"
CPU-level scheduling statistics can be read from /proc/schedstat; the implementation is in kernel/sched/stats.c, show_schedstat():
# cat /proc/schedstat
version 15
cpu0 498206 0 ...
domain0 003 5 5 0 0 0 0 0 5 0 0 0 0 0 0 0 0 7 7 0 0 0 0 7 0 0 0 0 0 0 0 0 0 0 14 1 0
domain1 113 5 5 0 0 0 0 0 5 0 0 0 0 0 0 0 0 7 7 0 0 0 0 0 7 0 0 0 0 0 0 0 0 0 17 0 0
cpu1 329113 0 ...
domain0 003 4 4 0 0 0 0 1 3 0 0 0 0 0 0 0 0 4 3 0 2 1 0 2 1 0 0 0 0 0 0 0 0 0 9 3 0
domain1 113 4 4 0 0 0 0 0 1 0 0 0 0 0 0 0 0 3 3 0 0 0 0 0 3 0 0 0 0 0 0 0 0 0 7 0 0
cpu4 18835 0 ... 5205662 8797513 2492988 37 8 7857723
domain0 113 0 0 0 0 1 0 0 0 1 1 0 0 0 0 0 1 3 2 1 201 0 0 0 2 1 0 1 0 0 0 0 0 0 8 7 0
cpu8 32417 0 ... 4938475 9351290 2514217 88 6 7933881
domain0 113 1 1 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1 1 0 0 0 0 0 1 1 0 0 0 0 0 0 0 0 7 8 0
Per-task scheduling statistics can be read from /proc/<pid>/schedstat; the implementation is in fs/proc/base.c, proc_pid_schedstat():

# cat /proc/824/schedstat
5601999 20
/* task->se.sum_exec_runtime, task->sched_info.run_delay, task->sched_info.pcount */
2.3. The RT scheduling algorithm
Having analyzed CFS for normal tasks, let's look at the algorithm for RT tasks (SCHED_RR/SCHED_FIFO). RT scheduling has changed very little: the organization is still the classic array of lists, rq->rt.active.queue[MAX_RT_PRIO], 100 list heads (priorities 0-99) holding the runnable RT tasks. RT scheduling is implemented by the rt_sched_class family of functions.
SCHED_FIFO scheduling is simple: the highest-priority task keeps running until it gives up the CPU voluntarily.
SCHED_RR tasks round-robin their timeslice among equals at the same priority; the slice length is controlled by the sched_rr_timeslice variable:
# cat /proc/sys/kernel/sched_rr_timeslice_ms
2.3.1. task_tick_rt()
scheduler_tick() -> task_tick_rt()
static void task_tick_rt(struct rq *rq, struct task_struct *p, int queued)
{
    struct sched_rt_entity *rt_se = &p->rt;

    update_curr_rt(rq);
    sched_rt_update_capacity_req(rq);

    watchdog(rq, p);

    /* RR tasks need a special form of timeslice management;
     * FIFO tasks have no timeslices. */
    if (p->policy != SCHED_RR)
        return;

    if (--p->rt.time_slice)
        return;

    p->rt.time_slice = sched_rr_timeslice;

    /* Requeue to the end of the queue if we (and all of our ancestors)
     * are not the only element on the queue. */
    for_each_sched_rt_entity(rt_se) {
        if (rt_se->run_list.prev != rt_se->run_list.next) {
            requeue_task_rt(rq, p, 0);
            resched_curr(rq);
            return;
        }
    }
}
static void update_curr_rt(struct rq *rq)
{
    struct task_struct *curr = rq->curr;
    struct sched_rt_entity *rt_se = &curr->rt;
    u64 delta_exec;
    int cpu = rq_cpu(rq);
#ifdef CONFIG_MTK_RT_THROTTLE_MON
    struct rt_rq *cpu_rt_rq;
    u64 runtime;
    u64 old_exec_start;

    old_exec_start = curr->se.exec_start;
#endif

    if (curr->sched_class != &rt_sched_class)
        return;

    per_cpu(update_exec_start, rq->cpu) = curr->se.exec_start;
    delta_exec = rq_clock_task(rq) - curr->se.exec_start;
    if (unlikely((s64)delta_exec <= 0))
        return;

    schedstat_set(curr->se.statistics.exec_max,
                  max(curr->se.statistics.exec_max, delta_exec));

    per_cpu(exec_task, cpu).pid = curr->pid;
    per_cpu(exec_task, cpu).prio = curr->prio;
    strncpy(per_cpu(exec_task, cpu).comm, curr->comm,
            sizeof(per_cpu(exec_task, cpu).comm));
    per_cpu(exec_delta_time, cpu) = delta_exec;
    per_cpu(clock_task, cpu) = rq->clock_task;
    per_cpu(exec_start, cpu) = curr->se.exec_start;

    curr->se.sum_exec_runtime += delta_exec;
    account_group_exec_runtime(curr, delta_exec);

    curr->se.exec_start = rq_clock_task(rq);
    cpuacct_charge(curr, delta_exec);

    sched_rt_avg_update(rq, delta_exec);

    per_cpu(sched_update_exec_start, rq->cpu) = per_cpu(update_curr_exec_start, rq->cpu);
    per_cpu(update_curr_exec_start, rq->cpu) = sched_clock_cpu(rq->cpu);

    if (!rt_bandwidth_enabled())
        return;

#ifdef CONFIG_MTK_RT_THROTTLE_MON
    cpu_rt_rq = rt_rq_of_se(rt_se);
    runtime = sched_rt_runtime(cpu_rt_rq);
    if (cpu_rt_rq->rt_time == 0 && !(cpu_rt_rq->rt_throttled)) {
        if (old_exec_start < per_cpu(rt_period_time, cpu) &&
            (per_cpu(old_rt_time, cpu) + delta_exec) > runtime) {
            save_mt_rt_mon_info(cpu, delta_exec, curr);
            mt_rt_mon_switch(MON_STOP, cpu);
            mt_rt_mon_print_task(cpu);
        }
        mt_rt_mon_switch(MON_RESET, cpu);
        mt_rt_mon_switch(MON_START, cpu);
        update_mt_rt_mon_start(cpu, delta_exec);
    }
    save_mt_rt_mon_info(cpu, delta_exec, curr);
#endif

    for_each_sched_rt_entity(rt_se) {
        struct rt_rq *rt_rq = rt_rq_of_se(rt_se);

        if (sched_rt_runtime(rt_rq) != RUNTIME_INF) {
            raw_spin_lock(&rt_rq->rt_runtime_lock);
            rt_rq->rt_time += delta_exec;
            if (sched_rt_runtime_exceeded(rt_rq))
                resched_curr(rq);
            raw_spin_unlock(&rt_rq->rt_runtime_lock);
        }
    }
}
static inline void sched_rt_avg_update(struct rq *rq, u64 rt_delta)
{
    rq->rt_avg += rt_delta * arch_scale_freq_capacity(NULL, cpu_of(rq));
}
2.3.2. rq->rt_avg
rq->rt_avg accumulates (RT runtime × freq_capacity); its main consumer is CPU_FREQ_GOV_SCHED.
The idea behind CONFIG_CPU_FREQ_GOV_SCHED is that cfs and rt each account their own part of cpu_sched_capacity_reqs; update_cpu_capacity_request() then combines the cfs and rt freq_capacity requests and asks the cpufreq framework to pick a suitable CPU frequency. CPU_FREQ_GOV_SCHED is meant to replace the interactive governor.
static inline void set_cfs_cpu_capacity(int cpu, bool request,
                                        unsigned long capacity, int type)
{
#ifdef CONFIG_CPU_FREQ_SCHED_ASSIST
    if (true) {
        if (per_cpu(cpu_sched_capacity_reqs, cpu).cfs != capacity) {
            per_cpu(cpu_sched_capacity_reqs, cpu).cfs = capacity;
            ...
        }
    }
    ...
#endif
}

(The quoted source is truncated here in the original article.)
