On Tue, May 06, 2014 at 01:45:01AM +0200, Rafael J. Wysocki wrote:
> On 5/6/2014 1:33 AM, Johannes Weiner wrote:
> >Hi Oliver,
> >
> >On Mon, May 05, 2014 at 11:00:13PM +0200, Oliver Winker wrote:
> >>Hello,
> >>
> >>1) Attached a full function-trace log + other SysRq outputs, see [1]
> >>attached.
> >>
> >>I saw bdi_...() calls in the s2disk paths, but didn't check in detail.
> >>It's probably more efficient if one of you looks at it directly.
> >
> >Thanks, this looks interesting.  balance_dirty_pages() wakes up the
> >bdi_wq workqueue as it should:
> >
> >[ 249.148009] s2disk-3327 2.... 48550413us : global_dirty_limits <-balance_dirty_pages_ratelimited
> >[ 249.148009] s2disk-3327 2.... 48550414us : global_dirtyable_memory <-global_dirty_limits
> >[ 249.148009] s2disk-3327 2.... 48550414us : writeback_in_progress <-balance_dirty_pages_ratelimited
> >[ 249.148009] s2disk-3327 2.... 48550414us : bdi_start_background_writeback <-balance_dirty_pages_ratelimited
> >[ 249.148009] s2disk-3327 2.... 48550414us : mod_delayed_work_on <-balance_dirty_pages_ratelimited
> >
> >but the worker wakeup doesn't actually do anything:
> >
> >[ 249.148009] kworker/-3466 2d... 48550431us : finish_task_switch <-__schedule
> >[ 249.148009] kworker/-3466 2.... 48550431us : _raw_spin_lock_irq <-worker_thread
> >[ 249.148009] kworker/-3466 2d... 48550431us : need_to_create_worker <-worker_thread
> >[ 249.148009] kworker/-3466 2d... 48550432us : worker_enter_idle <-worker_thread
> >[ 249.148009] kworker/-3466 2d... 48550432us : too_many_workers <-worker_enter_idle
> >[ 249.148009] kworker/-3466 2.... 48550432us : schedule <-worker_thread
> >[ 249.148009] kworker/-3466 2.... 48550432us : __schedule <-worker_thread
> >
> >My suspicion is that this fails because the bdi_wq is frozen at this
> >point, so the flush work never runs until resume, whereas before my
> >patch the effective dirty limit was high enough that the image could be
> >written in one go without being throttled, followed by an fsync() that
> >then writes the pages in the context of the unfrozen s2disk.
> >
> >Does this make sense?  Rafael?  Tejun?
>
> Well, it does seem to make sense to me.

From what I see, this is a deadlock in the userspace suspend model and it
just happened to work by chance in the past.

Can we patch suspend-utils as follows?  Alternatively, suspend-utils could
clear the dirty limits before it starts writing the image and restore them
after resume.

---
From 73d6546d5e264130e3d108c97d8317f86dc11149 Mon Sep 17 00:00:00 2001
From: Johannes Weiner
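
For the alternative mentioned above, here is a minimal sketch (not the attached
patch) of what the suspend-utils side could look like: raise the vm dirty limit
through the standard /proc/sys/vm/dirty_ratio knob before writing the image, so
that balance_dirty_pages() never throttles s2disk, and restore the saved value
after resume.  The helper names and the choice of dirty_ratio rather than
dirty_bytes are assumptions made purely for illustration.

#include <stdio.h>

#define DIRTY_RATIO_PATH "/proc/sys/vm/dirty_ratio"

static int saved_dirty_ratio;

static int read_int(const char *path, int *val)
{
	FILE *f = fopen(path, "r");

	if (!f)
		return -1;
	if (fscanf(f, "%d", val) != 1) {
		fclose(f);
		return -1;
	}
	fclose(f);
	return 0;
}

static int write_int(const char *path, int val)
{
	FILE *f = fopen(path, "w");

	if (!f)
		return -1;
	fprintf(f, "%d\n", val);
	fclose(f);
	return 0;
}

/* Hypothetical hook: call before s2disk starts writing the image. */
int dirty_limits_clear(void)
{
	if (read_int(DIRTY_RATIO_PATH, &saved_dirty_ratio))
		return -1;
	/* 100% of dirtyable memory: effectively no throttling while the image is written. */
	return write_int(DIRTY_RATIO_PATH, 100);
}

/* Hypothetical hook: call after resume (or on any error path) to restore the limit. */
int dirty_limits_restore(void)
{
	return write_int(DIRTY_RATIO_PATH, saved_dirty_ratio);
}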