Comment 4 for bug 1747069

Colin Ian King (colin-king) wrote:

Ignore my above comments. I bisected this again using a more reliable reproducer and found the first bad commit to be:
f1dd2cd13c4bbbc9a7c4617b3b034fa643de98fe is the first bad commit
commit f1dd2cd13c4bbbc9a7c4617b3b034fa643de98fe
Author: Michal Hocko <email address hidden>
Date: Thu Jul 6 15:38:11 2017 -0700

    mm, memory_hotplug: do not associate hotadded memory to zones until online

    The current memory hotplug implementation relies on having all the
    struct pages associated with a zone/node during the physical hotplug
    phase (arch_add_memory->__add_pages->__add_section->__add_zone). In the
    vast majority of cases this means that they are added to ZONE_NORMAL.
    This has been so since 9d99aaa31f59 ("[PATCH] x86_64: Support memory
    hotadd without sparsemem") and it wasn't a big deal back then because
    movable onlining didn't exist yet.

    Much later memory hotplug wanted to (ab)use ZONE_MOVABLE for movable
    onlining in 511c2aba8f07 ("mm, memory-hotplug: dynamic configure movable
    memory and portion memory") and then things got more complicated.
    Rather than reconsidering the zone association, which was no longer
    needed (because memory hotplug already depended on SPARSEMEM), a
    convoluted semantic of zone shifting was developed. Only the current
    last memblock, or the one adjacent to ZONE_MOVABLE, can be onlined
    movable. This essentially means that the online type changes as new
    memblocks are added.

    Let's simulate memory hot-add and online manually:
      $ echo 0x100000000 > /sys/devices/system/memory/probe
      $ grep . /sys/devices/system/memory/memory32/valid_zones
      Normal Movable

      $ echo $((0x100000000+(128<<20))) > /sys/devices/system/memory/probe
      $ grep . /sys/devices/system/memory/memory3?/valid_zones
      /sys/devices/system/memory/memory32/valid_zones:Normal
      /sys/devices/system/memory/memory33/valid_zones:Normal Movable

      $ echo $((0x100000000+2*(128<<20))) > /sys/devices/system/memory/probe
      $ grep . /sys/devices/system/memory/memory3?/valid_zones
      /sys/devices/system/memory/memory32/valid_zones:Normal
      /sys/devices/system/memory/memory33/valid_zones:Normal
      /sys/devices/system/memory/memory34/valid_zones:Normal Movable

      $ echo online_movable > /sys/devices/system/memory/memory34/state
      $ grep . /sys/devices/system/memory/memory3?/valid_zones
      /sys/devices/system/memory/memory32/valid_zones:Normal
      /sys/devices/system/memory/memory33/valid_zones:Normal Movable
      /sys/devices/system/memory/memory34/valid_zones:Movable Normal
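
    The 128 MiB step used above is the memory block size; it can also be
    read from sysfs rather than hard-coded (a sketch assuming the standard
    block_size_bytes attribute, which reports the size in hex), so the last
    probe could equivalently be written as:

      $ block=0x$(cat /sys/devices/system/memory/block_size_bytes)
      $ echo $((0x100000000 + 2 * block)) > /sys/devices/system/memory/probe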

    This is an awkward semantic because a udev event is sent as soon as the
    block is hotadded, and a udev handler might want to online it based on
    some policy (e.g. association with a node), but it will inherently race
    with new blocks showing up.
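
    A minimal sketch of such a policy handler, assuming only the standard
    sysfs memory-hotplug interface (the script itself is hypothetical and
    would typically be triggered from a udev rule on memory add events):

      #!/bin/sh
      # Hypothetical handler: try to online every still-offline memory
      # block as movable.  Under the old semantics this inherently races
      # with further blocks being added, because a block's valid_zones
      # changes once a later block shows up.
      for state in /sys/devices/system/memory/memory*/state; do
          if grep -q offline "$state"; then
              echo online_movable > "$state" 2>/dev/null ||
                  echo "cannot online ${state%/state} as movable" >&2
          fi
      done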

    This patch changes the physical hotplug phase to not associate pages
    with any zone at all. All the pages are just marked reserved and wait
    for the onlining phase to associate them with a zone according to the
    online request. There are only two requirements:

            - existing ZONE_NORMAL and ZONE_MOVABLE cannot overlap

            - ZONE_NORMAL precedes ZONE_MOVABLE in physical addresses

    The latter is not an inherent requirement and can be changed in the
    future; it preserves the current behavior and keeps the code slightly
    simpler.
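
    The resulting zone layout can be inspected from userspace, e.g. with a
    sketch like the following (assuming the usual /proc/zoneinfo format);
    it prints one "zone start_pfn" pair per zone, and ZONE_NORMAL's start
    pfn should come before ZONE_MOVABLE's:

      $ awk '/^Node/ {zone = $4} /start_pfn:/ {print zone, $2}' /proc/zoneinfo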

    This means that the same physical online steps as above will lead to the
    following state:

      Normal Movable

      /sys/devices/system/memory/memory32/valid_zones:Normal Movable
      /sys/devices/system/memory/memory33/valid_zones:Normal Movable

      /sys/devices/system/memory/memory32/valid_zones:Normal Movable
      /sys/devices/system/memory/memory33/valid_zones:Normal Movable
      /sys/devices/system/memory/memory34/valid_zones:Normal Movable

      /sys/devices/system/memory/memory32/valid_zones:Normal Movable
      /sys/devices/system/memory/memory33/valid_zones:Normal Movable
      /sys/devices/system/memory/memory34/valid_zones:Movable
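
    With the new semantics a handler can simply consult valid_zones for the
    block it is about to online, for example (a hypothetical snippet using
    the same sysfs files as above):

      $ blk=/sys/devices/system/memory/memory34
      $ if grep -qw Movable "$blk/valid_zones"; then
      >     echo online_movable > "$blk/state"
      > else
      >     echo online > "$blk/state"
      > fi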

    Implementation:
    The current move_pfn_range is reimplemented to check the above
    requirements (allow_online_pfn_range) and then to update the respective
    zone (move_pfn_range_to_zone) and the pgdat, and to link all the pages
    in the pfn range with the zone/node. __add_pages is updated to not
    require the zone and only initializes sections in the range. This
    allowed the arch_add_memory code to be simplified (s390 could get rid
    of quite some code).

    devm_memremap_pages is the only user of arch_add_memory that relies on
    the zone association, because it only hooks into memory hotplug half
    way. It uses it to associate the new memory with ZONE_DEVICE but
    doesn't allow it to be {on,off}lined via sysfs. This means that this
    particular code path has to call move_pfn_range_to_zone explicitly.

    The original zone shifting code is kept in place and will be removed in
    a follow-up patch to make review easier.

    Please note that this patch also changes the original behavior:
    offlining a memory block adjacent to another zone (Normal vs. Movable)
    used to allow changing its movable type. This will be handled later.

    [<email address hidden>: simplify zone_intersects()]
      Link: http://<email address hidden>
    [<email address hidden>: remove duplicate call for set_page_links]
      Link: http://<email address hidden>
    [<email address hidden>: remove unused local `i']
      Link: http://<email address hidden>
    Signed-off-by: Michal Hocko <email address hidden>
    Signed-off-by: Wei Yang <email address hidden>
    Tested-by: Dan Williams <email address hidden>
    Tested-by: Reza Arbab <email address hidden>
    Acked-by: Heiko Carstens <email address hidden> # For s390 bits
    Acked-by: Vlastimil Babka <email address hidden>
    Cc: Martin Schwidefsky <email address hidden>
    Cc: Andi Kleen <email address hidden>
    Cc: Andrea Arcangeli <email address hidden>
    Cc: Balbir Singh <email address hidden>
    Cc: Daniel Kiper <email address hidden>
    Cc: David Rientjes <email address hidden>
    Cc: Igor Mammedov <email address hidden>
    Cc: Jerome Glisse <email address hidden>
    Cc: Joonsoo Kim <email address hidden>
    Cc: Mel Gorman <email address hidden>
    Cc: Tobias Regnery <email address hidden>
    Cc: Toshi Kani <email address hidden>
    Cc: Vitaly Kuznetsov <email address hidden>
    Cc: Xishi Qiu <email address hidden>
    Cc: Yasuaki Ishimatsu <email address hidden>
    Signed-off-by: Andrew Morton <email address hidden>
    Signed-off-by: Linus Torvalds <email address hidden>

:040000 040000 dca0ef52236584b5c8c259bd24af9e941acd8f61 16dc4a7bdc0e5f9e5689484814a7532c2a2a658f M arch
:040000 040000 c778bd5d4f71bb7eedeeee5cd41205f3ee8988d6 3a3cbdc1bded174f94557e0df36fa3a969494614 M drivers
:040000 040000 26cd3e2e3b96b9ba52e88d874c3a694aa165432a fffd98c095893c6d21be55c4b1b557622b247bcb M include
:040000 040000 c7b57745068ed37966fdb258f0e668963035d3da b881b885d3329f4b4adcd5260c823a771c33629a M kernel
:040000 040000 7ace9d1295af184e06a45164a0e056173080fa33 08ba75397576db653a4f3bb6d3ff7780bc01d561 M mm