I captured a perf data file, starting just before the NMI kicked in - in this case, it again occurred on CPU 11. I found that CPU 11 had spent a lot of its time in cursor_timer_handler():
    -   16.92%  swapper        [kernel.kallsyms]  [k] _raw_spin_unlock_irqrestore
       - _raw_spin_unlock_irqrestore
          + 16.87% mod_timer
          + 0.05% cursor_timer_handler
    -   12.15%  swapper        [kernel.kallsyms]  [k] queue_work_on
       - queue_work_on
          + 12.00% cursor_timer_handler
          + 0.15% call_timer_fn
    +   10.98%  swapper        [kernel.kallsyms]  [k] run_timer_softirq
    -    2.23%  swapper        [kernel.kallsyms]  [k] mod_timer
       - mod_timer
          + 1.97% cursor_timer_handler
          + 0.26% call_timer_fn
Looking at the profiles of the other CPUs, I found an interesting one - CPU #12 - which appears to be the one actually running the cursor-update work that CPU 11 is scheduling:
    -   42.18%  kworker/u96:2  [kernel.kallsyms]  [k] ww_mutex_unlock
       - ww_mutex_unlock
          - 40.70% ast_dirty_update
               ast_imageblit
               soft_cursor
               bit_cursor
               fb_flashcursor
               process_one_work
               worker_thread
               kthread
               ret_from_fork
          + 1.48% ast_imageblit
    -   40.15%  kworker/u96:2  [kernel.kallsyms]  [k] __memcpy_toio
       - __memcpy_toio
          + 31.54% ast_dirty_update
          + 8.61% ast_imageblit
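As a side note, tallying samples per CPU doesn't have to be done by eyeballing the TUI. A hypothetical sketch that counts samples per CPU from `perf script` output - the bracketed `[cpu]` field parsed here is the one perf prints in its sample headers:

```python
import re
from collections import Counter

# `perf script` sample headers contain the CPU in brackets, e.g.
#   swapper  0 [011] 1234.5678: cycles: ...
CPU_RE = re.compile(r'\[(\d+)\]')

def samples_per_cpu(perf_script_lines):
    """Count how many samples landed on each CPU."""
    counts = Counter()
    for line in perf_script_lines:
        m = CPU_RE.search(line)
        if m:
            counts[int(m.group(1))] += 1
    return counts

# Illustrative input, not real captured data.
sample = [
    "swapper        0 [011] 1234.5678: cycles: _raw_spin_unlock_irqrestore",
    "kworker/u96:2  9 [012] 1234.5679: cycles: ww_mutex_unlock",
    "swapper        0 [011] 1234.5680: cycles: queue_work_on",
]
print(samples_per_cpu(sample))  # Counter({11: 2, 12: 1})
```

Feeding it the real `perf script` output would confirm where the samples concentrate across CPUs.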
I wonder if this path is blocking, preventing the timer handler on CPU #11 from being rescheduled. Indeed, when I time how long the handler takes from start to finish, I am occasionally seeing times of around 0.1s.
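One way to quantify those occasional long runs is to post-process function_graph trace output rather than reading it by hand. A hypothetical sketch - it assumes duration-annotated closing lines in ftrace's usual `NNN us | } /* func */` format:

```python
import re

# Match a duration annotation (in microseconds) on a function_graph
# closing line for cursor_timer_handler.
DUR_RE = re.compile(r'([\d.]+)\s*us\s*\|.*cursor_timer_handler')

def slow_calls(trace_lines, threshold_us=50_000.0):
    """Return durations (us) of handler invocations over the threshold."""
    out = []
    for line in trace_lines:
        m = DUR_RE.search(line)
        if m:
            dur = float(m.group(1))
            if dur >= threshold_us:
                out.append(dur)
    return out

# Illustrative trace lines, not real captured data.
sample = [
    " 11)   2.103 us    |  } /* cursor_timer_handler */",
    " 11) ! 101234.5 us |  } /* cursor_timer_handler */",
]
print(slow_calls(sample))  # [101234.5]
```

A handler duration near 100,000 us is the ~0.1s outlier described above; counting how often it occurs over a long trace gives a feel for how pathological the blocking is.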