
Commit cec1e6e

Merge tag 'sched_ext-for-6.17-rc7-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/tj/sched_ext
Pull sched_ext fix from Tejun Heo:

 "This contains a fix for sched_ext idle CPU selection that likely fixes a
  substantial performance regression. The scx_bpf_select_cpu_dfl/and()
  kfuncs were incorrectly detecting all tasks as migration-disabled when
  called outside ops.select_cpu(), causing them to always return -EBUSY
  instead of finding idle CPUs. The fix properly distinguishes between
  genuinely migration-disabled tasks vs. the current task whose migration
  is temporarily disabled by BPF execution"

* tag 'sched_ext-for-6.17-rc7-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/tj/sched_ext:
  sched_ext: idle: Handle migration-disabled tasks in BPF code
2 parents d4c7fcc + 55ed11b commit cec1e6e

File tree

1 file changed: +27 −1 lines changed

kernel/sched/ext_idle.c

Lines changed: 27 additions & 1 deletion

@@ -856,6 +856,32 @@ static bool check_builtin_idle_enabled(void)
 	return false;
 }
 
+/*
+ * Determine whether @p is a migration-disabled task in the context of BPF
+ * code.
+ *
+ * We can't simply check whether @p->migration_disabled is set in a
+ * sched_ext callback, because migration is always disabled for the current
+ * task while running BPF code.
+ *
+ * The prolog (__bpf_prog_enter) and epilog (__bpf_prog_exit) respectively
+ * disable and re-enable migration. For this reason, the current task
+ * inside a sched_ext callback is always a migration-disabled task.
+ *
+ * Therefore, when @p->migration_disabled == 1, check whether @p is the
+ * current task or not: if it is, then migration was not disabled before
+ * entering the callback, otherwise migration was disabled.
+ *
+ * Returns true if @p is migration-disabled, false otherwise.
+ */
+static bool is_bpf_migration_disabled(const struct task_struct *p)
+{
+	if (p->migration_disabled == 1)
+		return p != current;
+	else
+		return p->migration_disabled;
+}
+
 static s32 select_cpu_from_kfunc(struct task_struct *p, s32 prev_cpu, u64 wake_flags,
 				 const struct cpumask *allowed, u64 flags)
 {
@@ -898,7 +924,7 @@ static s32 select_cpu_from_kfunc(struct task_struct *p, s32 prev_cpu, u64 wake_f
 	 * selection optimizations and simply check whether the previously
 	 * used CPU is idle and within the allowed cpumask.
 	 */
-	if (p->nr_cpus_allowed == 1 || is_migration_disabled(p)) {
+	if (p->nr_cpus_allowed == 1 || is_bpf_migration_disabled(p)) {
 		if (cpumask_test_cpu(prev_cpu, allowed ?: p->cpus_ptr) &&
 		    scx_idle_test_and_clear_cpu(prev_cpu))
 			cpu = prev_cpu;

0 commit comments