With the Jaunty kernel 2.6.28-10, the RAID is no longer assembled at all. The array can still be created with mdadm --create --assume-clean, as before, apparently without any problems. That should be safe as long as nothing has been written to the array yet, and since it isn't being assembled, it should be fine. Unless the problem is in shutting down the array; but there is no segfault there, so probably not.
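For anyone wanting to reproduce or recover, this is roughly the sequence involved. The device names, RAID level, device count, layout, and chunk size below are placeholders, not the reporter's actual values; re-creating with the wrong geometry and then writing to the array will destroy data, so assembly should always be tried first.

```shell
# Non-destructive: try to assemble from the existing superblocks.
mdadm --assemble --scan --verbose

# Last resort: re-create the array in place. --assume-clean skips the
# initial resync and leaves data blocks untouched, but every parameter
# (level, device order, layout, chunk size) must match the original
# array exactly. Values shown here are examples only.
mdadm --create /dev/md0 --assume-clean --level=10 --raid-devices=4 \
      --layout=n2 --chunk=64 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1

# Check the result.
cat /proc/mdstat
```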
No joy with 2.6.28-11 either, although it includes three md raid10 fixes relating to recovery, which I had hoped might somehow impinge:
from upstream short changelog:
* md: avoid races when stopping resync.
* md/raid10: Don't call bitmap_cond_end_sync when we are doing recovery.
* md/raid10: Don't skip more than 1 bitmap-chunk at a time during recovery.
This seems like a serious bug to me. If you are suffering from it too, please add your comments; if it is just me and the original reporter, can someone please help me work out why? N.B. a Fedora kernel on Ubuntu starts the array correctly, so this is a kernel or mdadm bug.