Thank you, Brin, for your suggestion, but the command did not work for me, or perhaps I did not run it correctly. Below I have included both the 'devices' section of my /etc/lvm/lvm.conf and the verbose output of the pvcreate command. I have commented out the two other filters I previously had in that section. Hopefully you can see what I need to do from the attached configuration and the pvcreate output; after the configuration I have also noted the filter form I think the documented syntax describes, in case I have misread it.

# How LVM uses block devices.
devices {

# Configuration option devices/dir.
# Directory in which to create volume group device nodes.
# Commands also accept this as a prefix on volume group names.
# This configuration option is advanced.
dir = "/dev"

# Configuration option devices/scan.
# Directories containing device nodes to use with LVM.
# This configuration option is advanced.
scan = [ "/dev" ]

# Configuration option devices/obtain_device_list_from_udev.
# Obtain the list of available devices from udev.
# This avoids opening or using any inapplicable non-block devices or
# subdirectories found in the udev directory. Any device node or
# symlink not managed by udev in the udev directory is ignored. This
# setting applies only to the udev-managed device directory; other
# directories will be scanned fully. LVM needs to be compiled with
# udev support for this setting to apply.
obtain_device_list_from_udev = 1

# Configuration option devices/external_device_info_source.
# Select an external device information source.
# Some information may already be available in the system and LVM can
# use this information to determine the exact type or use of devices it
# processes. Using an existing external device information source can
# speed up device processing as LVM does not need to run its own native
# routines to acquire this information. For example, this information
# is used to drive LVM filtering like MD component detection, multipath
# component detection, partition detection and others.
#
# Accepted values:
# none
# No external device information source is used.
# udev
# Reuse existing udev database records. Applicable only if LVM is
# compiled with udev support.
# external_device_info_source = "none"

# Configuration option devices/preferred_names.
# Select which path name to display for a block device.
# If multiple path names exist for a block device, and LVM needs to
# display a name for the device, the path names are matched against
# each item in this list of regular expressions. The first match is
# used. Try to avoid using undescriptive /dev/dm-N names, if present.
# If no preferred name matches, or if preferred_names are not defined,
# the following built-in preferences are applied in order until one
# produces a preferred name:
# Prefer names with path prefixes in the order of:
# /dev/mapper, /dev/disk, /dev/dm-*, /dev/block.
# Prefer the name with the least number of slashes.
# Prefer a name that is a symlink.
# Prefer the path with least value in lexicographical order.
#
# Example
# preferred_names = [ "^/dev/mpath/", "^/dev/mapper/mpath", "^/dev/[hs]d" ]
#
# This configuration option does not have a default value defined.

# Configuration option devices/filter.
# Limit the block devices that are used by LVM commands.
# This is a list of regular expressions used to accept or reject block
# device path names. Each regex is delimited by a vertical bar '|'
# (or any character) and is preceded by 'a' to accept the path, or
# by 'r' to reject the path.
# The first regex in the list to match the path is used, producing the
# 'a' or 'r' result for the device.
# When multiple path names exist for a block device, if any path name
# matches an 'a' pattern before an 'r' pattern, then the device is
# accepted. If all the path names match an 'r' pattern first, then the
# device is rejected. Unmatching path names do not affect the accept
# or reject decision. If no path names for a device match a pattern,
# then the device is accepted. Be careful mixing 'a' and 'r' patterns,
# as the combination might produce unexpected results (test changes.)
# Run vgscan after changing the filter to regenerate the cache.
# See the use_lvmetad comment for a special case regarding filters.
#
# Example
# Accept every block device:
# filter = [ "a|.*/|" ]
# Reject the cdrom drive:
# filter = [ "r|/dev/cdrom|" ]
# Work with just loopback devices, e.g. for testing:
# filter = [ "a|loop|", "r|.*|" ]
# Accept all loop devices and ide drives except hdc:
# filter = [ "a|loop|", "r|/dev/hdc|", "a|/dev/ide|", "r|.*|" ]
# Use anchors to be very specific:
# filter = [ "a|^/dev/hda8$|", "r|.*/|" ]
#filter = [ "a/sda/", "r/.*/"]
#
# This configuration option has an automatic default value.
# filter = [ "a|.*/|" ]
#filter = [ "a/sda5/", "r/.*/"]

# Configuration option devices/global_filter.
# Limit the block devices that are used by LVM system components.
# Because devices/filter may be overridden from the command line, it is
# not suitable for system-wide device filtering, e.g. udev and lvmetad.
# Use global_filter to hide devices from these LVM system components.
# The syntax is the same as devices/filter. Devices rejected by
# global_filter are not opened by LVM.
# This configuration option has an automatic default value.
# global_filter = [ "a|.*/|" ]
global_filter = [ "a|.*/|", "a|sda5|", "r|.*|" ]

# Configuration option devices/cache_dir.
# Directory in which to store the device cache file.
# The results of filtering are cached on disk to avoid rescanning dud
# devices (which can take a very long time). By default this cache is
# stored in a file named .cache. It is safe to delete this file; the
# tools regenerate it. If obtain_device_list_from_udev is enabled, the
# list of devices is obtained from udev and any existing .cache file
# is removed.
cache_dir = "/run/lvm"

# Configuration option devices/cache_file_prefix.
# A prefix used before the .cache file name. See devices/cache_dir.
cache_file_prefix = ""

# Configuration option devices/write_cache_state.
# Enable/disable writing the cache file. See devices/cache_dir.
write_cache_state = 1

# Configuration option devices/types.
# List of additional acceptable block device types.
# These are of device type names from /proc/devices, followed by the
# maximum number of partitions.
#
# Example
# types = [ "fd", 16 ]
#
# This configuration option is advanced.
# This configuration option does not have a default value defined.

# Configuration option devices/sysfs_scan.
# Restrict device scanning to block devices appearing in sysfs.
# This is a quick way of filtering out block devices that are not
# present on the system. sysfs must be part of the kernel and mounted.
sysfs_scan = 1

# Configuration option devices/multipath_component_detection.
# Ignore devices that are components of DM multipath devices.
multipath_component_detection = 1

# Configuration option devices/md_component_detection.
# Ignore devices that are components of software RAID (md) devices.
md_component_detection = 1

# Configuration option devices/fw_raid_component_detection.
# Ignore devices that are components of firmware RAID devices.
# LVM must use an external_device_info_source other than none for this
# detection to execute.
fw_raid_component_detection = 0

# Configuration option devices/md_chunk_alignment.
# Align PV data blocks with md device's stripe-width.
# This applies if a PV is placed directly on an md device.
md_chunk_alignment = 1

# Configuration option devices/default_data_alignment.
# Default alignment of the start of a PV data area in MB.
# If set to 0, a value of 64KiB will be used.
# Set to 1 for 1MiB, 2 for 2MiB, etc.
# This configuration option has an automatic default value.
# default_data_alignment = 1

# Configuration option devices/data_alignment_detection.
# Detect PV data alignment based on sysfs device information.
# The start of a PV data area will be a multiple of minimum_io_size or
# optimal_io_size exposed in sysfs. minimum_io_size is the smallest
# request the device can perform without incurring a read-modify-write
# penalty, e.g. MD chunk size. optimal_io_size is the device's
# preferred unit of receiving I/O, e.g. MD stripe width.
# minimum_io_size is used if optimal_io_size is undefined (0).
# If md_chunk_alignment is enabled, that detects the optimal_io_size.
# This setting takes precedence over md_chunk_alignment.
data_alignment_detection = 1

# Configuration option devices/data_alignment.
# Alignment of the start of a PV data area in KiB.
# If a PV is placed directly on an md device and md_chunk_alignment or
# data_alignment_detection are enabled, then this setting is ignored.
# Otherwise, md_chunk_alignment and data_alignment_detection are
# disabled if this is set. Set to 0 to use the default alignment or the
# page size, if larger.
data_alignment = 0

# Configuration option devices/data_alignment_offset_detection.
# Detect PV data alignment offset based on sysfs device information.
# The start of a PV aligned data area will be shifted by the
# alignment_offset exposed in sysfs. This offset is often 0, but may
# be non-zero. Certain 4KiB sector drives that compensate for windows
# partitioning will have an alignment_offset of 3584 bytes (sector 7
# is the lowest aligned logical block, the 4KiB sectors start at
# LBA -1, and consequently sector 63 is aligned on a 4KiB boundary).
# pvcreate --dataalignmentoffset will skip this detection.
data_alignment_offset_detection = 1

# Configuration option devices/ignore_suspended_devices.
# Ignore DM devices that have I/O suspended while scanning devices.
# Otherwise, LVM waits for a suspended device to become accessible.
# This should only be needed in recovery situations.
ignore_suspended_devices = 0

# Configuration option devices/ignore_lvm_mirrors.
# Do not scan 'mirror' LVs to avoid possible deadlocks.
# This avoids possible deadlocks when using the 'mirror' segment type.
# This setting determines whether LVs using the 'mirror' segment type
# are scanned for LVM labels. This affects the ability of mirrors to
# be used as physical volumes. If this setting is enabled, it is
# impossible to create VGs on top of mirror LVs, i.e. to stack VGs on
# mirror LVs. If this setting is disabled, allowing mirror LVs to be
# scanned, it may cause LVM processes and I/O to the mirror to become
# blocked. This is due to the way that the mirror segment type handles
# failures.
# In order for the hang to occur, an LVM command must be run
# just after a failure and before the automatic LVM repair process
# takes place, or there must be failures in multiple mirrors in the
# same VG at the same time with write failures occurring moments before
# a scan of the mirror's labels. The 'mirror' scanning problems do not
# apply to LVM RAID types like 'raid1' which handle failures in a
# different way, making them a better choice for VG stacking.
ignore_lvm_mirrors = 1

# Configuration option devices/disable_after_error_count.
# Number of I/O errors after which a device is skipped.
# During each LVM operation, errors received from each device are
# counted. If the counter of a device exceeds the limit set here,
# no further I/O is sent to that device for the remainder of the
# operation. Setting this to 0 disables the counters altogether.
disable_after_error_count = 0

# Configuration option devices/require_restorefile_with_uuid.
# Allow use of pvcreate --uuid without requiring --restorefile.
require_restorefile_with_uuid = 1

# Configuration option devices/pv_min_size.
# Minimum size in KiB of block devices which can be used as PVs.
# In a clustered environment all nodes must use the same value.
# Any value smaller than 512KiB is ignored. The previous built-in
# value was 512.
pv_min_size = 2048

# Configuration option devices/issue_discards.
# Issue discards to PVs that are no longer used by an LV.
# Discards are sent to an LV's underlying physical volumes when the LV
# is no longer using the physical volumes' space, e.g. lvremove,
# lvreduce. Discards inform the storage that a region is no longer
# used. Storage that supports discards advertise the protocol-specific
# way discards should be issued by the kernel (TRIM, UNMAP, or
# WRITE SAME with UNMAP bit set). Not all storage will support or
# benefit from discards, but SSDs and thinly provisioned LUNs
# generally do. If enabled, discards will only be issued if both the
# storage and kernel provide support.
issue_discards = 1
}

# Configuration section allocation.
# How LVM selects space and applies properties to LVs.
allocation {

# Configuration option allocation/cling_tag_list.
# Advise LVM which PVs to use when searching for new space.
# When searching for free space to extend an LV, the 'cling' allocation
# policy will choose space on the same PVs as the last segment of the
# existing LV. If there is insufficient space and a list of tags is
# defined here, it will check whether any of them are attached to the
# PVs concerned and then seek to match those PV tags between existing
# extents and new extents.
#
# Example
# Use the special tag "@*" as a wildcard to match any PV tag:
# cling_tag_list = [ "@*" ]
# LVs are mirrored between two sites within a single VG, and
# PVs are tagged with either @site1 or @site2 to indicate where
# they are situated:
# cling_tag_list = [ "@site1", "@site2" ]
#
# This configuration option does not have a default value defined.

# Configuration option allocation/maximise_cling.
# Use a previous allocation algorithm.
# Changes made in version 2.02.85 extended the reach of the 'cling'
# policies to detect more situations where data can be grouped onto
# the same disks. This setting can be used to disable the changes
# and revert to the previous algorithm.
maximise_cling = 1

# Configuration option allocation/use_blkid_wiping.
# Use blkid to detect existing signatures on new PVs and LVs.
# The blkid library can detect more signatures than the native LVM
# detection code, but may take longer.
# LVM needs to be compiled with
# blkid wiping support for this setting to apply. LVM native detection
# code is currently able to recognize: MD device signatures,
# swap signature, and LUKS signatures. To see the list of signatures
# recognized by blkid, check the output of the 'blkid -k' command.
use_blkid_wiping = 1

# Configuration option allocation/wipe_signatures_when_zeroing_new_lvs.
# Look for and erase any signatures while zeroing a new LV.
# The --wipesignatures option overrides this setting.
# Zeroing is controlled by the -Z/--zero option, and if not specified,
# zeroing is used by default if possible. Zeroing simply overwrites the
# first 4KiB of a new LV with zeroes and does no signature detection or
# wiping. Signature wiping goes beyond zeroing and detects exact types
# and positions of signatures within the whole LV. It provides a
# cleaner LV after creation as all known signatures are wiped. The LV
# is not claimed incorrectly by other tools because of old signatures
# from previous use. The number of signatures that LVM can detect
# depends on the detection code that is selected (see
# use_blkid_wiping.) Wiping each detected signature must be confirmed.
# When this setting is disabled, signatures on new LVs are not detected
# or erased unless the --wipesignatures option is used directly.
wipe_signatures_when_zeroing_new_lvs = 1

# Configuration option allocation/mirror_logs_require_separate_pvs.
# Mirror logs and images will always use different PVs.
# The default setting changed in version 2.02.85.
mirror_logs_require_separate_pvs = 0

# Configuration option allocation/cache_pool_metadata_require_separate_pvs.
# Cache pool metadata and data will always use different PVs.
cache_pool_metadata_require_separate_pvs = 0

# Configuration option allocation/cache_mode.
# The default cache mode used for new cache.
#
# Accepted values:
# writethrough
# Data blocks are immediately written from the cache to disk.
# writeback
# Data blocks are written from the cache back to disk after some
# delay to improve performance.
#
# This setting replaces allocation/cache_pool_cachemode.
# This configuration option has an automatic default value.
# cache_mode = "writethrough"

# Configuration option allocation/cache_policy.
# The default cache policy used for new cache volume.
# Since kernel 4.2 the default policy is smq (Stochastic multiqueue),
# otherwise the older mq (Multiqueue) policy is selected.
# This configuration option does not have a default value defined.

# Configuration section allocation/cache_settings.
# Settings for the cache policy.
# See documentation for individual cache policies for more info.
# This configuration section has an automatic default value.
# cache_settings {
# }

# Configuration option allocation/cache_pool_chunk_size.
# The minimal chunk size in KiB for cache pool volumes.
# Using a chunk_size that is too large can result in wasteful use of
# the cache, where small reads and writes can cause large sections of
# an LV to be mapped into the cache. However, choosing a chunk_size
# that is too small can result in more overhead trying to manage the
# numerous chunks that become mapped into the cache. The former is
# more of a problem than the latter in most cases, so the default is
# on the smaller end of the spectrum. Supported values range from
# 32KiB to 1GiB in multiples of 32.
# This configuration option does not have a default value defined.

# Configuration option allocation/thin_pool_metadata_require_separate_pvs.
# Thin pool metadata and data will always use different PVs.
thin_pool_metadata_require_separate_pvs = 0

# Configuration option allocation/thin_pool_zero.
# Thin pool data chunks are zeroed before they are first used.
# Zeroing with a larger thin pool chunk size reduces performance.
# This configuration option has an automatic default value.
# thin_pool_zero = 1

# Configuration option allocation/thin_pool_discards.
# The discards behaviour of thin pool volumes.
#
# Accepted values:
# ignore
# nopassdown
# passdown
#
# This configuration option has an automatic default value.
# thin_pool_discards = "passdown"

# Configuration option allocation/thin_pool_chunk_size_policy.
# The chunk size calculation policy for thin pool volumes.
#
# Accepted values:
# generic
# If thin_pool_chunk_size is defined, use it. Otherwise, calculate
# the chunk size based on estimation and device hints exposed in
# sysfs - the minimum_io_size. The chunk size is always at least
# 64KiB.
# performance
# If thin_pool_chunk_size is defined, use it. Otherwise, calculate
# the chunk size for performance based on device hints exposed in
# sysfs - the optimal_io_size. The chunk size is always at least
# 512KiB.
#
# This configuration option has an automatic default value.
# thin_pool_chunk_size_policy = "generic"

# Configuration option allocation/thin_pool_chunk_size.
# The minimal chunk size in KiB for thin pool volumes.
# Larger chunk sizes may improve performance for plain thin volumes,
# however using them for snapshot volumes is less efficient, as it
# consumes more space and takes extra time for copying. When unset,
# lvm tries to estimate chunk size starting from 64KiB. Supported
# values are in the range 64KiB to 1GiB.
# This configuration option does not have a default value defined.

# Configuration option allocation/physical_extent_size.
# Default physical extent size in KiB to use for new VGs.
# This configuration option has an automatic default value.
# physical_extent_size = 4096
}

# Configuration section log.
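Based on the filter syntax described in the comments above, this is the form I believe should accept only the sda5 partition and reject every other device. I am only guessing at the exact pattern from the documented examples, so please correct me if I have read it wrong:

# my reading of the documented syntax: accept /dev/sda5, reject everything else
filter = [ "a|^/dev/sda5$|", "r|.*|" ]
# global_filter uses the same syntax, so presumably the same list would go there too
global_filter = [ "a|^/dev/sda5$|", "r|.*|" ]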
pvcreate verbose output

root@blockstorage# pvcreate -ff -vvv /dev/sda5
WARNING: Ignoring duplicate config value: filter
DEGRADED MODE. Incomplete RAID LVs will be processed.
Setting activation/monitoring to 1
Processing: pvcreate -ff -vvv /dev/sda5
system ID:
O_DIRECT will be used
Setting global/locking_type to 1
Setting global/wait_for_locks to 1
File-based locking selected.
Setting global/prioritise_write_locks to 1
Setting global/locking_dir to /run/lock/lvm
Setting global/use_lvmlockd to 0
metadata/pvmetadataignore not found in config: defaulting to 0
metadata/pvmetadatasize not found in config: defaulting to 255
metadata/pvmetadatacopies not found in config: defaulting to 1
Locking /run/lock/lvm/P_orphans WB
_do_flock /run/lock/lvm/P_orphans:aux WB
_do_flock /run/lock/lvm/P_orphans WB
_undo_flock /run/lock/lvm/P_orphans:aux
Metadata cache has no info for vgname: "#orphans"
Asking lvmetad for complete list of known PVs
Setting response to OK
Setting response to OK
Setting id to 2fIKWj-ob3c-GJM7-5Twb-tsoY-QgnR-Wn3O6u
Setting vgid to UH8xpW-biA4-FleR-H9ib-aLAw-JiP2-3ciFXI
Setting vgname to blockstorage-vg
Setting format to lvm2
Setting device to 2053
Setting dev_size to 999139835904
Setting label_sector to 1
/dev/sda: Added to device cache (8:0)
/dev/disk/by-id/scsi-361866da052d1b6001ec35c8e11148c9c: Aliased to /dev/sda in device cache (8:0)
/dev/disk/by-id/wwn-0x61866da052d1b6001ec35c8e11148c9c: Aliased to /dev/sda in device cache (8:0)
/dev/disk/by-path/pci-0000:03:00.0-scsi-0:2:0:0: Aliased to /dev/sda in device cache (8:0)
/dev/sda1: Added to device cache (8:1)
/dev/disk/by-id/scsi-361866da052d1b6001ec35c8e11148c9c-part1: Aliased to /dev/sda1 in device cache (8:1)
/dev/disk/by-id/wwn-0x61866da052d1b6001ec35c8e11148c9c-part1: Aliased to /dev/sda1 in device cache (8:1)
/dev/disk/by-path/pci-0000:03:00.0-scsi-0:2:0:0-part1: Aliased to /dev/sda1 in device cache (8:1)
/dev/disk/by-uuid/b19befce-5523-4176-b681-e7f2373d2b71: Aliased to /dev/sda1 in device cache (8:1)
/dev/sda2: Added to device cache (8:2)
/dev/disk/by-id/scsi-361866da052d1b6001ec35c8e11148c9c-part2: Aliased to /dev/sda2 in device cache (8:2)
/dev/disk/by-id/wwn-0x61866da052d1b6001ec35c8e11148c9c-part2: Aliased to /dev/sda2 in device cache (8:2)
/dev/disk/by-path/pci-0000:03:00.0-scsi-0:2:0:0-part2: Aliased to /dev/sda2 in device cache (8:2)
/dev/sda5: Added to device cache (8:5)
/dev/disk/by-id/lvm-pv-uuid-2fIKWj-ob3c-GJM7-5Twb-tsoY-QgnR-Wn3O6u: Aliased to /dev/sda5 in device cache (8:5)
/dev/disk/by-id/scsi-361866da052d1b6001ec35c8e11148c9c-part5: Aliased to /dev/sda5 in device cache (8:5)
/dev/disk/by-id/wwn-0x61866da052d1b6001ec35c8e11148c9c-part5: Aliased to /dev/sda5 in device cache (8:5)
/dev/disk/by-path/pci-0000:03:00.0-scsi-0:2:0:0-part5: Aliased to /dev/sda5 in device cache (8:5)
/dev/sr0: Added to device cache (11:0)
/dev/cdrom: Aliased to /dev/sr0 in device cache (preferred name) (11:0)
/dev/cdrw: Aliased to /dev/cdrom in device cache (11:0)
/dev/disk/by-id/ata-HL-DT-ST_DVD+_-RW_GTA0N_KMQG39D4341: Aliased to /dev/cdrom in device cache (11:0)
/dev/disk/by-id/wwn-0x5001480000000000: Aliased to /dev/cdrom in device cache (11:0)
/dev/disk/by-path/pci-0000:00:1f.2-ata-5: Aliased to /dev/cdrom in device cache (11:0)
/dev/dvd: Aliased to /dev/cdrom in device cache (11:0)
/dev/dvdrw: Aliased to /dev/cdrom in device cache (11:0)
/dev/loop0: Added to device cache (7:0)
/dev/loop1: Added to device cache (7:1)
/dev/loop2: Added to device cache (7:2)
/dev/loop3: Added to device cache (7:3)
/dev/loop4: Added to device cache (7:4)
/dev/loop5: Added to device cache (7:5)
/dev/loop6: Added to device cache (7:6)
/dev/loop7: Added to device cache (7:7)
/dev/ram0: Added to device cache (1:0)
/dev/ram1: Added to device cache (1:1)
/dev/ram10: Added to device cache (1:10)
/dev/ram11: Added to device cache (1:11)
/dev/ram12: Added to device cache (1:12)
/dev/ram13: Added to device cache (1:13)
/dev/ram14: Added to device cache (1:14)
/dev/ram15: Added to device cache (1:15)
/dev/ram2: Added to device cache (1:2)
/dev/ram3: Added to device cache (1:3)
/dev/ram4: Added to device cache (1:4)
/dev/ram5: Added to device cache (1:5)
/dev/ram6: Added to device cache (1:6)
/dev/ram7: Added to device cache (1:7)
/dev/ram8: Added to device cache (1:8)
/dev/ram9: Added to device cache (1:9)
/dev/dm-0: Added to device cache (252:0)
/dev/blockstorage-vg/root: Aliased to /dev/dm-0 in device cache (preferred name) (252:0)
/dev/disk/by-id/dm-name-blockstorage--vg-root: Aliased to /dev/blockstorage-vg/root in device cache (252:0)
/dev/disk/by-id/dm-uuid-LVM-UH8xpWbiA4FleRH9ibaLAwJiP23ciFXILBn7QRH8NfHTBuQsWlb4zLPD8iuA5ubv: Aliased to /dev/blockstorage-vg/root in device cache (252:0)
/dev/disk/by-uuid/bd2f92aa-80ae-4a12-aedc-e2a22ab18b76: Aliased to /dev/blockstorage-vg/root in device cache (252:0)
/dev/mapper/blockstorage--vg-root: Aliased to /dev/blockstorage-vg/root in device cache (252:0)
/dev/dm-1: Added to device cache (252:1)
/dev/blockstorage-vg/swap_1: Aliased to /dev/dm-1 in device cache (preferred name) (252:1)
/dev/disk/by-id/dm-name-blockstorage--vg-swap_1: Aliased to /dev/blockstorage-vg/swap_1 in device cache (252:1)
/dev/disk/by-id/dm-uuid-LVM-UH8xpWbiA4FleRH9ibaLAwJiP23ciFXIHgmIGU3l8fF3dLBOl9chK7NvyYy03dEp: Aliased to /dev/blockstorage-vg/swap_1 in device cache (252:1)
/dev/disk/by-uuid/7345a368-3329-426d-b6be-9af46a933fe1: Aliased to /dev/blockstorage-vg/swap_1 in device cache (252:1)
/dev/mapper/blockstorage--vg-swap_1: Aliased to /dev/blockstorage-vg/swap_1 in device cache (252:1)
Metadata cache has no info for vgname: "blockstorage-vg"
Metadata cache has no info for vgname: "blockstorage-vg"
lvmcache: /dev/sda5: now in VG blockstorage-vg with 0 mdas
lvmcache: /dev/sda5: setting blockstorage-vg VGID to UH8xpWbiA4FleRH9ibaLAwJiP23ciFXI
Setting size to 1044480
Setting start to 4096
Setting ignore to 0
Using /dev/sda5
Asking lvmetad for VG UH8xpW-biA4-FleR-H9ib-aLAw-JiP2-3ciFXI (blockstorage-vg)
Setting response to OK
Setting response to OK
Setting name to blockstorage-vg
Setting metadata/format to lvm2
Setting id to 2fIKWj-ob3c-GJM7-5Twb-tsoY-QgnR-Wn3O6u
Setting format to lvm2
Setting device to 2053
Setting dev_size to 1951444992
Setting label_sector to 1
Found same device /dev/sda5 with same pvid 2fIKWjob3cGJM75TwbtsoYQgnRWn3O6u
lvmcache: /dev/sda5: now in VG #orphans_lvm2 (#orphans_lvm2) with 1 mdas
Setting size to 1044480
Setting start to 4096
Setting ignore to 0
Allocated VG blockstorage-vg at 0x5637a2ca72b0.
Metadata cache has no info for vgname: "blockstorage-vg"
Metadata cache has no info for vgname: "blockstorage-vg"
lvmcache: /dev/sda5: now in VG blockstorage-vg with 1 mdas
lvmcache: /dev/sda5: setting blockstorage-vg VGID to UH8xpWbiA4FleRH9ibaLAwJiP23ciFXI
Setting response to OK
Setting response to OK
/dev/sda5 0: 0 230023: root(0:0)
/dev/sda5 1: 230023 8180: swap_1(0:0)
/dev/sda5 2: 238203 10: NULL(0:0)
Freeing VG blockstorage-vg at 0x5637a2ca72b0.
Really INITIALIZE physical volume "/dev/sda5" of volume group "blockstorage-vg" [y/n]?
y
Opened /dev/sda5 RO O_DIRECT
/dev/sda5: size is 1951444992 sectors
Closed /dev/sda5
/dev/sda5: Device is a partition, using primary device sda for mpath component detection
Opened /dev/sda5 RO O_DIRECT
Closed /dev/sda5
/dev/sda5: size is 1951444992 sectors
Opened /dev/sda5 RO O_DIRECT
/dev/sda5: block size is 4096 bytes
/dev/sda5: physical block size is 512 bytes
Closed /dev/sda5
Using /dev/sda5
/dev/sda5: open failed: Device or resource busy
Can't open /dev/sda5 exclusively. Mounted filesystem?
Unlocking /run/lock/lvm/P_orphans
_undo_flock /run/lock/lvm/P_orphans
Metadata cache has no info for vgname: "#orphans"
Completed: pvcreate -ff -vvv /dev/sda5
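In case it helps with the diagnosis, these are the read-only checks I was planning to run next to see what is still holding /dev/sda5 open. I am assuming these are the appropriate commands for that, so let me know if there is a better way:

lsblk /dev/sda5      # show whether anything is mounted from or stacked on top of the partition
lvs -a -o +devices   # list the LVs together with the devices they sit on
dmsetup ls --tree    # show the device-mapper stack that might be keeping the device busy
fuser -vm /dev/sda5  # list any processes that still have the device in use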