Tag: pvs

Linux LVM: File-based locking initialisation failed

If you run an LVM command and encounter the message "File-based locking initialisation failed", the filesystem holding the LVM lock directory is mounted read-only

root@linux:~ # pvs
File-based locking initialisation failed

The directory /var/lock/lvm must be writable, so either the /var or the / filesystem is mounted read-only.

Remount it read-write, or reboot the server

mount -o remount,rw /
reboot -d -n -f
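Before remounting, it helps to confirm which filesystem is actually read-only. A minimal sketch, assuming the standard /proc/mounts format ("device mountpoint fstype options dump pass"):

```shell
# Sketch: report whether a mount point is mounted ro or rw.
# Reads /proc/mounts-style lines on stdin; the field positions are an
# assumption based on the usual /proc/mounts layout.
mount_mode() {
  awk -v mp="$1" '$2 == mp {
    n = split($4, opts, ",")
    for (i = 1; i <= n; i++)
      if (opts[i] == "ro" || opts[i] == "rw") { print opts[i]; exit }
  }'
}

# On a live system: mount_mode / < /proc/mounts
echo "/dev/mapper/vgroot-root / ext4 ro,relatime 0 0" | mount_mode /
# -> ro
```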

Source: Running an LVM command returns "File-based locking initialisation failed" or "Locking type 1 initialisation failed"

Linux LVM: Device /dev/mapper/ECP_data_disk_001 has size of 1048576000 sectors which is smaller than corresponding PV size of 1174372352 sectors. Was device resized?

Running an LVM command shows a complaint that an unrelated disk has an unexpected size.

This happened because the server is a cluster node and the LUN had been resized. No resize commands had been run on the node that showed the message

root@linux02:~ # pvs | grep PP0_data_disk_001
Device /dev/mapper/ECP_data_disk_001 has size of 1048576000 sectors which is smaller than corresponding PV size of 1174372352 sectors. Was device resized?

Run the commands on that node to rescan the paths, resize the multipath map, and resize the PV

root@linux02:~ # multipath -ll ECP_data_disk_001 | grep ready | awk '{print "echo 1 > /sys/block/"$3"/device/rescan"}' | bash
root@linux02:~ # multipathd -k'resize map ECP_data_disk_001'
ok
root@linux02:~ # pvresize /dev/mapper/ECP_data_disk_001
Physical volume "/dev/mapper/ECP_data_disk_001" changed
1 physical volume(s) resized / 0 physical volume(s) not resized
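The rescan one-liner above can be written as a small helper that turns multipath -ll output into rescan commands. A sketch, assuming path lines look like the ones in the outputs below (e.g. "|- 0:0:0:21 sdy 65:128 active ready running", with the sd device in the third field):

```shell
# Sketch: read "multipath -ll <map>" output on stdin and print one rescan
# command per ready path. The field position ($3 = sd device) is an
# assumption based on the multipath -ll output format shown in this post.
rescan_cmds() {
  grep ready | awk '{print "echo 1 > /sys/block/" $3 "/device/rescan"}'
}

# On a live system: multipath -ll ECP_data_disk_001 | rescan_cmds | bash
echo '|- 0:0:0:21 sdy 65:128 active ready running' | rescan_cmds
# -> echo 1 > /sys/block/sdy/device/rescan
```

Printing the commands first lets you review them before piping to bash.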

UXMON: SY1_log2_disk_001 – Only one path detected, no path redundancy

Also see:
UXMON: mpathb – Only one path detected, no path redundancy
UXMON: volumegroup – Only one path detected, no path redundancy

ATTENTION, RMC LEVEL 1 AGENT: This ticket will be automatically worked by the Automation Bus. Pls. ensure your Ticket List/View includes the “Assignee” column, monitor this ticket until the user “ABOPERATOR” is no longer assigned, BEFORE you start work on this ticket.
Node : linux.setaoffice.com
Node Type : Intel/AMD x64(HTTPS)
Severity : major
OM Server Time: 2017-12-11 03:28:31
Message : UXMON: SY1_log2_disk_001 – Only one path detected, no path redundancy
Msg Group : OS
Application : mpmon
Object : mp
Event Type :
not_found

Instance Name :
not_found

Instruction : The multipathd -k"show map $device topology" command shows more details

Please check /var/opt/OV/log/OpC/mp_mon.log for more details
EventDataSource :

When running an LVM command, it shows that several PVs are duplicated.

root@linux:~ # pvs
Found duplicate PV s3zEE42awIhydJ05hfUzJulPsN8WS266: using /dev/mapper/SY1_disknew_001 not /dev/sdbm
Using duplicate PV /dev/mapper/SY1_disknew_001 from subsystem DM, replacing /dev/sdbm
Found duplicate PV zTZlnwYgW69xzUcTi0riu5euTPiWWnRs: using /dev/mapper/swap_disk_001p1 not /dev/sdbo1
Using duplicate PV /dev/mapper/swap_disk_001p1 from subsystem DM, ignoring /dev/sdbo1
Found duplicate PV hnuWyziFeXRhJ71YpLJX1cLAjOvgtl0q: using /dev/mapper/SCR_DATA_disk_001p1 not /dev/sdbn1
Using duplicate PV /dev/mapper/SCR_DATA_disk_001p1 from subsystem DM, replacing /dev/sdbn1
Found duplicate PV EeqL0LDdMohfnagni16NlRUig3eugbap: using /dev/mapper/SY1_log2_disk_001p1 not /dev/sdbp1
Using duplicate PV /dev/mapper/SY1_log2_disk_001p1 from subsystem DM, ignoring /dev/sdbp1
Found duplicate PV sMAYQABFedTStD589d5ZcDu1ZNtXfXyh: using /dev/mapper/SY1_log1_disk_001p1 not /dev/sdbq1
Using duplicate PV /dev/mapper/SY1_log1_disk_001p1 from subsystem DM, ignoring /dev/sdbq1
Found duplicate PV qFBDDPa7FtN7F97tpUDeKv0cBO94WBr3: using /dev/mapper/SCR_disk_002 not /dev/sdbr
Using duplicate PV /dev/mapper/SCR_disk_002 from subsystem DM, ignoring /dev/sdbr
Found duplicate PV RZvwCnQn48G2A9IifNOLZT9l9ZYdE7yu: using /dev/mapper/SY1_arch_disk_001p1 not /dev/sdbs1
Using duplicate PV /dev/mapper/SY1_arch_disk_001p1 from subsystem DM, ignoring /dev/sdbs1
Found duplicate PV qWPh0v3E1ADowxUuuC8kP8BFJ1bwBsmL: using /dev/mapper/SY1_disk_001p1 not /dev/sdbt1
Using duplicate PV /dev/mapper/SY1_disk_001p1 from subsystem DM, ignoring /dev/sdbt1
Found duplicate PV hi5F4g9FPojaD9J7KH5vHCzRKgd4jQfJ: using /dev/mapper/MR2_log2_disk_001p1 not /dev/sdbu1
Using duplicate PV /dev/mapper/MR2_log2_disk_001p1 from subsystem DM, ignoring /dev/sdbu1
Found duplicate PV KA1AkbCOgfHFGuKazmHAacwqBNPmEyV8: using /dev/mapper/MR2_log1_disk_001p1 not /dev/sdbv1
Using duplicate PV /dev/mapper/MR2_log1_disk_001p1 from subsystem DM, ignoring /dev/sdbv1
Found duplicate PV dy0YaJoSrlZdTM9isfY1QGS6kWBTzs6i: using /dev/mapper/MR2_data_disk_001p1 not /dev/sdbw1
Using duplicate PV /dev/mapper/MR2_data_disk_001p1 from subsystem DM, ignoring /dev/sdbw1
Found duplicate PV khSEnsJ9CC4epw0uOAsaFwlbIKS32Qyj: using /dev/mapper/MR2_arch_disk_001p1 not /dev/sdbx1
Using duplicate PV /dev/mapper/MR2_arch_disk_001p1 from subsystem DM, ignoring /dev/sdbx1
Found duplicate PV PmTTDGWvEvDM8AMUHvqcoRJeA9g4r78D: using /dev/mapper/MRC_log2_disk_001p1 not /dev/sdbi1
Using duplicate PV /dev/mapper/MRC_log2_disk_001p1 from subsystem DM, ignoring /dev/sdbi1
Found duplicate PV ffv1TgDxRIEIyhRoEh46fMGeFuJ1tVtv: using /dev/mapper/MR2_disk_001p1 not /dev/sdby1
Using duplicate PV /dev/mapper/MR2_disk_001p1 from subsystem DM, ignoring /dev/sdby1
Found duplicate PV tCeMdOB1dFuJomhgB061M7MeZNEwptk3: using /dev/mapper/MRC_log1_disk_001p1 not /dev/sdbj1
Using duplicate PV /dev/mapper/MRC_log1_disk_001p1 from subsystem DM, ignoring /dev/sdbj1
Found duplicate PV inuuS3f39VjSuh6q7r5pEZSjzRgRhi5e: using /dev/mapper/sap_disk_001p1 not /dev/sdbz1
Using duplicate PV /dev/mapper/sap_disk_001p1 from subsystem DM, ignoring /dev/sdbz1
Found duplicate PV Ea4CVYCMjYc3eE129uYlUlZKdzhmYNpQ: using /dev/mapper/MRC_data_disk_001p1 not /dev/sdbk1
Using duplicate PV /dev/mapper/MRC_data_disk_001p1 from subsystem DM, ignoring /dev/sdbk1
Found duplicate PV Lx5viH3geNo0zSO07sHcbNlJ9nVmcGPA: using /dev/mapper/MRC_arch_disk_001p1 not /dev/sdbl1
Using duplicate PV /dev/mapper/MRC_arch_disk_001p1 from subsystem DM, ignoring /dev/sdbl1
PV VG Fmt Attr PSize PFree
/dev/mapper/MR2_arch_disk_001p1 vgMR2oraarch lvm2 a–u 19.98g 0
/dev/mapper/MR2_data_disk_001p1 vgMR2data lvm2 a–u 299.98g 99.98g
/dev/mapper/MR2_disk_001p1 vgMR2 lvm2 a–u 99.98g 20.87g
/dev/mapper/MR2_log1_disk_001p1 vgMR2log1 lvm2 a–u 9.98g 7.98g
/dev/mapper/MR2_log2_disk_001p1 vgMR2log2 lvm2 a–u 9.98g 7.98g
/dev/mapper/MRC_arch_disk_001p1 vgMRCoraarch lvm2 a–u 19.98g 0
/dev/mapper/MRC_data_disk_001p1 vgMRCdata lvm2 a–u 299.98g 39.98g
/dev/mapper/MRC_disk_002 vgMRC lvm2 a–u 149.98g 31.88g
/dev/mapper/MRC_log1_disk_001p1 vgMRClog1 lvm2 a–u 9.98g 7.98g
/dev/mapper/MRC_log2_disk_001p1 vgMRClog2 lvm2 a–u 9.98g 7.98g
/dev/mapper/SCR_ARCH_disk_001p1 vgSCRarch lvm2 a–u 29.96g 4.57g
/dev/mapper/SCR_DATA_disk_001p1 vgSCRdata lvm2 a–u 109.96g 8.40g
/dev/mapper/SCR_LOG1_disk_001p1 vgSCRlog1 lvm2 a–u 4.96g 984.00m
/dev/mapper/SCR_LOG2_disk_001p1 vgSCRlog2 lvm2 a–u 4.96g 984.00m
/dev/mapper/SCR_disk_001p1 vgSCR lvm2 a–u 49.96g 3.96g
/dev/mapper/SCR_disk_002 vgSCR lvm2 a–u 49.98g 34.98g
/dev/mapper/SY1_arch_disk_001p1 vgSY1oraarch lvm2 a–u 149.98g 0
/dev/mapper/SY1_data_disk_002 vgSY1data lvm2 a–u 2.64t 0
/dev/mapper/SY1_disk_001p1 vgSY1 lvm2 a–u 49.98g 10.52g
/dev/mapper/SY1_disknew_001 vgSY1 lvm2 a–u 59.98g 44.98g
/dev/mapper/SY1_interf_disk_001 vgSY1interface lvm2 a–u 49.98g 1008.00m
/dev/mapper/SY1_log1_disk_001p1 vgSY1log1 lvm2 a–u 19.98g 9.98g
/dev/mapper/SY1_log2_disk_001p1 vgSY1log2 lvm2 a–u 19.98g 9.98g
/dev/mapper/sap_disk_001p1 vgSAPlocal lvm2 a–u 49.98g 4.79g
/dev/mapper/swap_disk_001p1 vgswap lvm2 a–u 223.98g 0
/dev/sda2 vgroot lvm2 a–u 278.86g 198.27g

Removing the sd disk paths that show up as duplicates

echo 1 > /sys/block/sdbm/device/delete
echo 1 > /sys/block/sdbo/device/delete
echo 1 > /sys/block/sdbn/device/delete
echo 1 > /sys/block/sdbp/device/delete
echo 1 > /sys/block/sdbq/device/delete
echo 1 > /sys/block/sdbr/device/delete
echo 1 > /sys/block/sdbs/device/delete
echo 1 > /sys/block/sdbt/device/delete
echo 1 > /sys/block/sdbu/device/delete
echo 1 > /sys/block/sdbv/device/delete
echo 1 > /sys/block/sdbw/device/delete
echo 1 > /sys/block/sdbx/device/delete
echo 1 > /sys/block/sdbi/device/delete
echo 1 > /sys/block/sdby/device/delete
echo 1 > /sys/block/sdbj/device/delete
echo 1 > /sys/block/sdbz/device/delete
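The repetitive deletes above can be generated from a device list. A sketch that prints the commands (rather than running them) so the list can be reviewed before piping to sh:

```shell
# Sketch: print the sysfs delete commands for a list of stale sd devices.
# The device names are taken from the "Found duplicate PV" messages; always
# double-check them before actually running the output.
delete_cmds() {
  for dev in "$@"; do
    printf 'echo 1 > /sys/block/%s/device/delete\n' "$dev"
  done
}

delete_cmds sdbm sdbn sdbo   # append "| sh" once the list is verified
```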

Checking status

root@linux:~ # pvs
PV VG Fmt Attr PSize PFree
/dev/mapper/MR2_arch_disk_001p1 vgMR2oraarch lvm2 a–u 19.98g 0
/dev/mapper/MR2_data_disk_001p1 vgMR2data lvm2 a–u 299.98g 99.98g
/dev/mapper/MR2_disk_001p1 vgMR2 lvm2 a–u 99.98g 20.87g
/dev/mapper/MR2_log1_disk_001p1 vgMR2log1 lvm2 a–u 9.98g 7.98g
/dev/mapper/MR2_log2_disk_001p1 vgMR2log2 lvm2 a–u 9.98g 7.98g
/dev/mapper/MRC_arch_disk_001p1 vgMRCoraarch lvm2 a–u 19.98g 0
/dev/mapper/MRC_data_disk_001p1 vgMRCdata lvm2 a–u 299.98g 39.98g
/dev/mapper/MRC_disk_002 vgMRC lvm2 a–u 149.98g 31.88g
/dev/mapper/MRC_log1_disk_001p1 vgMRClog1 lvm2 a–u 9.98g 7.98g
/dev/mapper/MRC_log2_disk_001p1 vgMRClog2 lvm2 a–u 9.98g 7.98g
/dev/mapper/SCR_ARCH_disk_001p1 vgSCRarch lvm2 a–u 29.96g 4.57g
/dev/mapper/SCR_DATA_disk_001p1 vgSCRdata lvm2 a–u 109.96g 8.40g
/dev/mapper/SCR_LOG1_disk_001p1 vgSCRlog1 lvm2 a–u 4.96g 984.00m
/dev/mapper/SCR_LOG2_disk_001p1 vgSCRlog2 lvm2 a–u 4.96g 984.00m
/dev/mapper/SCR_disk_001p1 vgSCR lvm2 a–u 49.96g 3.96g
/dev/mapper/SCR_disk_002 vgSCR lvm2 a–u 49.98g 34.98g
/dev/mapper/SY1_arch_disk_001p1 vgSY1oraarch lvm2 a–u 149.98g 0
/dev/mapper/SY1_data_disk_002 vgSY1data lvm2 a–u 2.64t 0
/dev/mapper/SY1_disk_001p1 vgSY1 lvm2 a–u 49.98g 10.52g
/dev/mapper/SY1_disknew_001 vgSY1 lvm2 a–u 59.98g 44.98g
/dev/mapper/SY1_interf_disk_001 vgSY1interface lvm2 a–u 49.98g 1008.00m
/dev/mapper/SY1_log1_disk_001p1 vgSY1log1 lvm2 a–u 19.98g 9.98g
/dev/mapper/SY1_log2_disk_001p1 vgSY1log2 lvm2 a–u 19.98g 9.98g
/dev/mapper/sap_disk_001p1 vgSAPlocal lvm2 a–u 49.98g 4.79g
/dev/mapper/swap_disk_001p1 vgswap lvm2 a–u 223.98g 0
/dev/sda2 vgroot lvm2 a–u 278.86g 198.27g

Checking the multipath devices; some devices have only one path

root@linux:~ # multipath -ll
SCR_disk_001 (350002acb5a22374a) dm-26 3PARdata,VV
size=50G features=’0′ hwhandler=’0′ wp=rw
`-+- policy=’round-robin 0′ prio=1 status=active
|- 0:0:0:21 sdy 65:128 active ready running
`- 1:0:0:21 sdcb 68:240 active ready running
SY1_log2_disk_001 (350002ac17cd5374a) dm-6 3PARdata,VV
size=20G features=’0′ hwhandler=’0′ wp=rw
`-+- policy=’round-robin 0′ prio=1 status=active
`- 0:0:0:6 sdi 8:128 active ready running
MRC_arch_disk_001 (350002ac15504374a) dm-3 3PARdata,VV
size=20G features=’0′ hwhandler=’0′ wp=rw
`-+- policy=’round-robin 0′ prio=1 status=active
`- 0:0:0:3 sdf 8:80 active ready running
MR2_data_disk_001 (350002ac17cd8374a) dm-14 3PARdata,VV
size=300G features=’0′ hwhandler=’0′ wp=rw
`-+- policy=’round-robin 0′ prio=1 status=active
`- 0:0:0:13 sdp 8:240 active ready running
MR2_disk_001 (350002ac17cd6374a) dm-18 3PARdata,VV
size=100G features=’0′ hwhandler=’0′ wp=rw
`-+- policy=’round-robin 0′ prio=1 status=active
`- 0:0:0:15 sds 65:32 active ready running
SY1_log1_disk_001 (350002ac17cd4374a) dm-7 3PARdata,VV
size=20G features=’0′ hwhandler=’0′ wp=rw
`-+- policy=’round-robin 0′ prio=1 status=active
`- 0:0:0:7 sdj 8:144 active ready running
MR2_log2_disk_001 (350002ac17ce7374a) dm-11 3PARdata,VV
size=10G features=’0′ hwhandler=’0′ wp=rw
`-+- policy=’round-robin 0′ prio=1 status=active
`- 0:0:0:11 sdn 8:208 active ready running
MRC_disk_002 (350002ac2904a374a) dm-32 3PARdata,VV
size=150G features=’0′ hwhandler=’0′ wp=rw
`-+- policy=’round-robin 0′ prio=1 status=active
|- 0:0:0:23 sdaa 65:160 active ready running
`- 1:0:0:23 sdcd 69:16 active ready running
SCR_ARCH_disk_001 (350002acb5a2c374a) dm-16 3PARdata,VV
size=30G features=’0′ hwhandler=’0′ wp=rw
`-+- policy=’round-robin 0′ prio=1 status=active
|- 0:0:0:17 sdu 65:64 active ready running
`- 1:0:0:17 sdca 68:224 active ready running
MR2_log1_disk_001 (350002ac17ce5374a) dm-12 3PARdata,VV
size=10G features=’0′ hwhandler=’0′ wp=rw
`-+- policy=’round-robin 0′ prio=1 status=active
`- 0:0:0:12 sdo 8:224 active ready running
SY1_arch_disk_001 (350002ac17cca374a) dm-9 3PARdata,VV
size=150G features=’0′ hwhandler=’0′ wp=rw
`-+- policy=’round-robin 0′ prio=1 status=active
`- 0:0:0:9 sdl 8:176 active ready running
SY1_disknew_001 (350002ac4c566374a) dm-4 3PARdata,VV
size=60G features=’0′ hwhandler=’0′ wp=rw
`-+- policy=’round-robin 0′ prio=1 status=active
`- 0:0:0:4 sdg 8:96 active ready running
SCR_LOG2_disk_001 (350002acb5a39374a) dm-17 3PARdata,VV
size=5.0G features=’0′ hwhandler=’0′ wp=rw
`-+- policy=’round-robin 0′ prio=1 status=active
|- 0:0:0:18 sdv 65:80 active ready running
`- 1:0:0:18 sdq 65:0 active ready running
MR2_arch_disk_001 (350002ac17cd7374a) dm-15 3PARdata,VV
size=20G features=’0′ hwhandler=’0′ wp=rw
`-+- policy=’round-robin 0′ prio=1 status=active
`- 0:0:0:14 sdr 65:16 active ready running
SCR_LOG1_disk_001 (350002acb5a38374a) dm-19 3PARdata,VV
size=5.0G features=’0′ hwhandler=’0′ wp=rw
`-+- policy=’round-robin 0′ prio=1 status=active
|- 0:0:0:19 sdw 65:96 active ready running
`- 1:0:0:19 sdav 66:240 active ready running
MRC_data_disk_001 (350002ac15505374a) dm-2 3PARdata,VV
size=300G features=’0′ hwhandler=’0′ wp=rw
`-+- policy=’round-robin 0′ prio=1 status=active
`- 0:0:0:2 sde 8:64 active ready running
sap_disk_001 (350002ac1fc92374a) dm-13 3PARdata,VV
size=50G features=’0′ hwhandler=’0′ wp=rw
`-+- policy=’round-robin 0′ prio=1 status=active
`- 0:0:0:16 sdt 65:48 active ready running
SY1_disk_001 (350002ac17cbd374a) dm-10 3PARdata,VV
size=50G features=’0′ hwhandler=’0′ wp=rw
`-+- policy=’round-robin 0′ prio=1 status=active
`- 0:0:0:10 sdm 8:192 active ready running
SY1_interf_disk_001 (350002ac0ce50374a) dm-33 3PARdata,VV
size=50G features=’0′ hwhandler=’0′ wp=rw
`-+- policy=’round-robin 0′ prio=1 status=active
|- 0:0:0:24 sdab 65:176 active ready running
`- 1:0:0:24 sdce 69:32 active ready running
MRC_log2_disk_001 (350002ac1551a374a) dm-71 3PARdata,VV
size=10G features=’0′ hwhandler=’0′ wp=rw
`-+- policy=’round-robin 0′ prio=1 status=active
`- 0:0:0:0 sdc 8:32 active ready running
SY1_data_disk_002 (350002ac23826374a) dm-31 3PARdata,VV
size=2.6T features=’0′ hwhandler=’0′ wp=rw
`-+- policy=’round-robin 0′ prio=1 status=active
|- 0:0:0:22 sdz 65:144 active ready running
`- 1:0:0:22 sdcc 69:0 active ready running
MRC_log1_disk_001 (350002ac15519374a) dm-72 3PARdata,VV
size=10G features=’0′ hwhandler=’0′ wp=rw
`-+- policy=’round-robin 0′ prio=1 status=active
`- 0:0:0:1 sdd 8:48 active ready running
SCR_DATA_disk_001 (350002acb5a37374a) dm-23 3PARdata,VV
size=110G features=’0′ hwhandler=’0′ wp=rw
`-+- policy=’round-robin 0′ prio=1 status=active
`- 0:0:0:20 sdx 65:112 active ready running
SCR_disk_002 (350002ac19e73374a) dm-8 3PARdata,VV
size=50G features=’0′ hwhandler=’0′ wp=rw
`-+- policy=’round-robin 0′ prio=1 status=active
`- 0:0:0:8 sdk 8:160 active ready running
swap_disk_001 (350002ac155d0374a) dm-5 3PARdata,VV
size=224G features=’0′ hwhandler=’0′ wp=rw
`-+- policy=’round-robin 0′ prio=1 status=active
`- 0:0:0:5 sdh 8:112 active ready running

Rescan the SCSI hosts to recognize the LUNs again and run multipath -v3

root@linux:~ # systool -av -c fc_host | grep "Class Device =" | awk -F'=' '{print $2}' | awk -F'"' '{print "echo \"- - -\" > /sys/class/scsi_host/"$2"/scan"}' | sh
root@linux:~ # multipath -v3
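A plainer equivalent of the systool pipeline above is to emit a "- - -" scan for each SCSI host directly. A sketch that prints the commands for review instead of running them; note it covers every host you pass in, not only the FC HBAs (an assumption that is usually acceptable on a SAN-attached server):

```shell
# Sketch: print the wildcard ("- - -" = any channel/target/LUN) scan
# commands for the given scsi_host sysfs directories.
scan_cmds() {
  for host in "$@"; do
    printf 'echo "- - -" > %s/scan\n' "$host"
  done
}

# On a live system: scan_cmds /sys/class/scsi_host/host* | sh
scan_cmds /sys/class/scsi_host/host0
```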

Paths restored

root@linux:~ # multipath -ll
SCR_disk_001 (350002acb5a22374a) dm-26 3PARdata,VV
size=50G features=’0′ hwhandler=’0′ wp=rw
`-+- policy=’round-robin 0′ prio=1 status=active
|- 0:0:0:21 sdy 65:128 active ready running
`- 1:0:0:21 sdcb 68:240 active ready running
SY1_log2_disk_001 (350002ac17cd5374a) dm-6 3PARdata,VV
size=20G features=’0′ hwhandler=’0′ wp=rw
`-+- policy=’round-robin 0′ prio=1 status=active
|- 0:0:0:6 sdi 8:128 active ready running
`- 1:0:0:6 sdai 66:32 active ready running
MRC_arch_disk_001 (350002ac15504374a) dm-3 3PARdata,VV
size=20G features=’0′ hwhandler=’0′ wp=rw
`-+- policy=’round-robin 0′ prio=1 status=active
|- 0:0:0:3 sdf 8:80 active ready running
`- 1:0:0:3 sdaf 65:240 active ready running
MR2_data_disk_001 (350002ac17cd8374a) dm-14 3PARdata,VV
size=300G features=’0′ hwhandler=’0′ wp=rw
`-+- policy=’round-robin 0′ prio=1 status=active
|- 0:0:0:13 sdp 8:240 active ready running
`- 1:0:0:13 sdap 66:144 active ready running
MR2_disk_001 (350002ac17cd6374a) dm-18 3PARdata,VV
size=100G features=’0′ hwhandler=’0′ wp=rw
`-+- policy=’round-robin 0′ prio=1 status=active
|- 0:0:0:15 sds 65:32 active ready running
`- 1:0:0:15 sdar 66:176 active ready running
SY1_log1_disk_001 (350002ac17cd4374a) dm-7 3PARdata,VV
size=20G features=’0′ hwhandler=’0′ wp=rw
`-+- policy=’round-robin 0′ prio=1 status=active
|- 0:0:0:7 sdj 8:144 active ready running
`- 1:0:0:7 sdaj 66:48 active ready running
MR2_log2_disk_001 (350002ac17ce7374a) dm-11 3PARdata,VV
size=10G features=’0′ hwhandler=’0′ wp=rw
`-+- policy=’round-robin 0′ prio=1 status=active
|- 0:0:0:11 sdn 8:208 active ready running
`- 1:0:0:11 sdan 66:112 active ready running
MRC_disk_002 (350002ac2904a374a) dm-32 3PARdata,VV
size=150G features=’0′ hwhandler=’0′ wp=rw
`-+- policy=’round-robin 0′ prio=1 status=active
|- 0:0:0:23 sdaa 65:160 active ready running
`- 1:0:0:23 sdcd 69:16 active ready running
SCR_ARCH_disk_001 (350002acb5a2c374a) dm-16 3PARdata,VV
size=30G features=’0′ hwhandler=’0′ wp=rw
`-+- policy=’round-robin 0′ prio=1 status=active
|- 0:0:0:17 sdu 65:64 active ready running
`- 1:0:0:17 sdca 68:224 active ready running
MR2_log1_disk_001 (350002ac17ce5374a) dm-12 3PARdata,VV
size=10G features=’0′ hwhandler=’0′ wp=rw
`-+- policy=’round-robin 0′ prio=1 status=active
|- 0:0:0:12 sdo 8:224 active ready running
`- 1:0:0:12 sdao 66:128 active ready running
SY1_arch_disk_001 (350002ac17cca374a) dm-9 3PARdata,VV
size=150G features=’0′ hwhandler=’0′ wp=rw
`-+- policy=’round-robin 0′ prio=1 status=active
|- 0:0:0:9 sdl 8:176 active ready running
`- 1:0:0:9 sdal 66:80 active ready running
SY1_disknew_001 (350002ac4c566374a) dm-4 3PARdata,VV
size=60G features=’0′ hwhandler=’0′ wp=rw
`-+- policy=’round-robin 0′ prio=1 status=active
|- 0:0:0:4 sdg 8:96 active ready running
`- 1:0:0:4 sdag 66:0 active ready running
SCR_LOG2_disk_001 (350002acb5a39374a) dm-17 3PARdata,VV
size=5.0G features=’0′ hwhandler=’0′ wp=rw
`-+- policy=’round-robin 0′ prio=1 status=active
|- 0:0:0:18 sdv 65:80 active ready running
`- 1:0:0:18 sdq 65:0 active ready running
MR2_arch_disk_001 (350002ac17cd7374a) dm-15 3PARdata,VV
size=20G features=’0′ hwhandler=’0′ wp=rw
`-+- policy=’round-robin 0′ prio=1 status=active
|- 0:0:0:14 sdr 65:16 active ready running
`- 1:0:0:14 sdaq 66:160 active ready running
SCR_LOG1_disk_001 (350002acb5a38374a) dm-19 3PARdata,VV
size=5.0G features=’0′ hwhandler=’0′ wp=rw
`-+- policy=’round-robin 0′ prio=1 status=active
|- 0:0:0:19 sdw 65:96 active ready running
`- 1:0:0:19 sdav 66:240 active ready running
MRC_data_disk_001 (350002ac15505374a) dm-2 3PARdata,VV
size=300G features=’0′ hwhandler=’0′ wp=rw
`-+- policy=’round-robin 0′ prio=1 status=active
|- 0:0:0:2 sde 8:64 active ready running
`- 1:0:0:2 sdae 65:224 active ready running
sap_disk_001 (350002ac1fc92374a) dm-13 3PARdata,VV
size=50G features=’0′ hwhandler=’0′ wp=rw
`-+- policy=’round-robin 0′ prio=1 status=active
|- 0:0:0:16 sdt 65:48 active ready running
`- 1:0:0:16 sdas 66:192 active ready running
SY1_disk_001 (350002ac17cbd374a) dm-10 3PARdata,VV
size=50G features=’0′ hwhandler=’0′ wp=rw
`-+- policy=’round-robin 0′ prio=1 status=active
|- 0:0:0:10 sdm 8:192 active ready running
`- 1:0:0:10 sdam 66:96 active ready running
SY1_interf_disk_001 (350002ac0ce50374a) dm-33 3PARdata,VV
size=50G features=’0′ hwhandler=’0′ wp=rw
`-+- policy=’round-robin 0′ prio=1 status=active
|- 0:0:0:24 sdab 65:176 active ready running
`- 1:0:0:24 sdce 69:32 active ready running
MRC_log2_disk_001 (350002ac1551a374a) dm-71 3PARdata,VV
size=10G features=’0′ hwhandler=’0′ wp=rw
`-+- policy=’round-robin 0′ prio=1 status=active
|- 0:0:0:0 sdc 8:32 active ready running
`- 1:0:0:0 sdac 65:192 active ready running
SY1_data_disk_002 (350002ac23826374a) dm-31 3PARdata,VV
size=2.6T features=’0′ hwhandler=’0′ wp=rw
`-+- policy=’round-robin 0′ prio=1 status=active
|- 0:0:0:22 sdz 65:144 active ready running
`- 1:0:0:22 sdcc 69:0 active ready running
MRC_log1_disk_001 (350002ac15519374a) dm-72 3PARdata,VV
size=10G features=’0′ hwhandler=’0′ wp=rw
`-+- policy=’round-robin 0′ prio=1 status=active
|- 0:0:0:1 sdd 8:48 active ready running
`- 1:0:0:1 sdad 65:208 active ready running
SCR_DATA_disk_001 (350002acb5a37374a) dm-23 3PARdata,VV
size=110G features=’0′ hwhandler=’0′ wp=rw
`-+- policy=’round-robin 0′ prio=1 status=active
|- 0:0:0:20 sdx 65:112 active ready running
`- 1:0:0:20 sdat 66:208 active ready running
SCR_disk_002 (350002ac19e73374a) dm-8 3PARdata,VV
size=50G features=’0′ hwhandler=’0′ wp=rw
`-+- policy=’round-robin 0′ prio=1 status=active
|- 0:0:0:8 sdk 8:160 active ready running
`- 1:0:0:8 sdak 66:64 active ready running
swap_disk_001 (350002ac155d0374a) dm-5 3PARdata,VV
size=224G features=’0′ hwhandler=’0′ wp=rw
`-+- policy=’round-robin 0′ prio=1 status=active
|- 0:0:0:5 sdh 8:112 active ready running
`- 1:0:0:5 sdah 66:16 active ready running

Problem solved

root@linux:~ # /var/opt/OV/bin/instrumentation/UXMONbroker -check mpmon
Fri Dec 15 11:39:25 2017 : INFO : UXMONmpmon is running now, pid=52191
mv: `/dev/null' and `/dev/null' are the same file
Fri Dec 15 11:39:26 2017 : INFO : UXMONmpmon end, pid=52191

One path missing in disk map on multipath device

Showing a particular case:

The disk mpath5 was showing only one path

root@linux:~ # multipath -ll mpath5
mpath5 (350002ac19430374a) dm-17 3PARdata,VV
[size=47G][features=0][hwhandler=0][rw]
\_ round-robin 0 [prio=1][active]
\_ 2:0:0:1 sdh 8:112 [active][ready]

The disk used by the operating system is cciss/c0d0

root@linux:~ # pvs
PV VG Fmt Attr PSize PFree
/dev/cciss/c0d0p3 vg00 lvm2 a– 269.47G 203.28G
/dev/mpath/350002ac19429374a vgapp lvm2 a– 100.00G 0
/dev/mpath/350002ac1942c374a vgapp lvm2 a– 20.00G 0
/dev/mpath/350002ac1942e374a vgapp lvm2 a– 75.00G 0
/dev/mpath/350002ac1942f374a vgapp lvm2 a– 158.00G 0
/dev/mpath/350002ac19430374a vgapp lvm2 a– 47.00G 996.00M
/dev/mpath/350002ac22869374a vgapp lvm2 a– 100.00G 0
/dev/mpath/350002ac2286a374a vgapp lvm2 a– 40.00G 0

Listing the SCSI devices; sda through sdn are in use

root@linux:~ # lsscsi
[1:0:0:1] disk 3PARdata VV 3213 /dev/sda
[1:0:0:2] disk 3PARdata VV 3213 /dev/sdb
[1:0:0:3] disk 3PARdata VV 3213 /dev/sdc
[1:0:0:4] disk 3PARdata VV 3213 /dev/sdd
[1:0:0:5] disk 3PARdata VV 3213 /dev/sde
[1:0:0:6] disk 3PARdata VV 3213 /dev/sdf
[1:0:0:7] disk 3PARdata VV 3213 /dev/sdg
[1:0:0:254] enclosu 3PARdata SES 3213 –
[2:0:0:1] disk 3PARdata VV 3213 /dev/sdh
[2:0:0:2] disk 3PARdata VV 3213 /dev/sdi
[2:0:0:3] disk 3PARdata VV 3213 /dev/sdj
[2:0:0:4] disk 3PARdata VV 3213 /dev/sdk
[2:0:0:5] disk 3PARdata VV 3213 /dev/sdl
[2:0:0:6] disk 3PARdata VV 3213 /dev/sdm
[2:0:0:7] disk 3PARdata VV 3213 /dev/sdn
[2:0:0:254] enclosu 3PARdata SES 3213 –

Checking /etc/multipath.conf: sda was being blacklisted, so the line was commented out

root@linux:~ # grep -v ^# /etc/multipath.conf

blacklist {
devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
devnode "^hd[a-z][[0-9]*]"
devnode "^hd[a-z]"
#devnode "^sda$"
}

defaults {
user_friendly_names yes
}
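To see which blacklist entry is hiding a device, you can test the device name against each devnode regex by hand. A sketch, assuming the entries are extended regexes matched unanchored (which is why the sda entry needs the ^ and $ anchors):

```shell
# Sketch: report which (if any) of the given devnode patterns matches a
# device name. Patterns are treated as unanchored extended regexes, an
# assumption about how multipath evaluates devnode entries.
is_blacklisted() {
  dev=$1; shift
  for pat in "$@"; do
    if echo "$dev" | grep -Eq "$pat"; then
      echo "blacklisted by $pat"
      return 0
    fi
  done
  echo "not blacklisted"
  return 1
}

is_blacklisted sda '^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*' '^sda$'
# -> blacklisted by ^sda$
```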

Running multipath -v3

root@linux:~ # multipath -v3

Checking disk mpath5

root@linux:~ # multipath -ll mpath5
mpath5 (350002ac19430374a) dm-17 3PARdata,VV
[size=47G][features=0][hwhandler=0][rw]
\_ round-robin 0 [prio=1][active]
\_ 1:0:0:1 sda 8:0 [active][ready]
\_ 2:0:0:1 sdh 8:112 [active][ready]

Linux LVM: Couldn't find device with uuid unHhGy-Fg3A-Y8wU-PrWh-hwWx-Ki0R-D6Qasc

On this server, any command that uses LVM returns an error message complaining about a missing disk

root@linux:~ # pvs
Couldn’t find device with uuid unHhGy-Fg3A-Y8wU-PrWh-hwWx-Ki0R-D6Qasc.
WARNING: Inconsistent metadata found for VG oraclevg – updating to use version 24
PV VG Fmt Attr PSize PFree
/dev/mapper/crashvgp1 oraclevg lvm2 a–u 99.98g 99.98g
/dev/mapper/mpathbp1 oraclevg lvm2 a–u 299.96g 299.96g
/dev/mapper/oraclevg_1p1 oraclevg lvm2 a–u 99.98g 0
/dev/mapper/oraclevg_2p1 oraclevg lvm2 a–u 49.98g 0
/dev/sda2 rootvg lvm2 a–u 279.12g 143.62g
unknown device oraclevg lvm2 a-mu 49.98g 49.98g

Volume group oraclevg shows up twice

root@linux:~ # vgs -v
Using volume group(s) on command line.
Cache: Duplicate VG name oraclevg: Existing 5Rxet9-eL9E-8hFU-8m98-pVLh-gZMD-e4vZBT (created here) takes precedence over R8fkNM-1vrs-S4DF-reUZ-1pts-zhxk-EHVT1K
Archiving volume group “oraclevg” metadata (seqno 33).
Archiving volume group “oraclevg” metadata (seqno 3).
Creating volume group backup “/etc/lvm/backup/oraclevg” (seqno 3).
Couldn’t find device with uuid unHhGy-Fg3A-Y8wU-PrWh-hwWx-Ki0R-D6Qasc.
Couldn’t find device with uuid unHhGy-Fg3A-Y8wU-PrWh-hwWx-Ki0R-D6Qasc.
Couldn’t find device with uuid unHhGy-Fg3A-Y8wU-PrWh-hwWx-Ki0R-D6Qasc.
Couldn’t find device with uuid unHhGy-Fg3A-Y8wU-PrWh-hwWx-Ki0R-D6Qasc.
WARNING: Inconsistent metadata found for VG oraclevg – updating to use version 34
There are 1 physical volumes missing.
There are 1 physical volumes missing.
Archiving volume group “oraclevg” metadata (seqno 3).
Archiving volume group “oraclevg” metadata (seqno 35).
Creating volume group backup “/etc/lvm/backup/oraclevg” (seqno 35).
VG Attr Ext #PV #LV #SN VSize VFree VG UUID VProfile
oraclevg wz–n- 4.00m 2 2 0 149.96g 0 R8fkNM-1vrs-S4DF-reUZ-1pts-zhxk-EHVT1K
oraclevg wz-pn- 4.00m 3 0 0 449.93g 449.93g 5Rxet9-eL9E-8hFU-8m98-pVLh-gZMD-e4vZBT
rootvg wz–n- 4.00m 1 10 0 279.12g 143.62g 685XSf-7Dsf-76oL-5pp7-t27Z-nT1o-dqXuUB

To view the properties of a specific volume group, use --select vg_uuid with the VG UUID gathered from the previous command

root@linux:~ # vgdisplay -v --select vg_uuid=5Rxet9-eL9E-8hFU-8m98-pVLh-gZMD-e4vZBT
Using volume group(s) on command line.
Cache: Duplicate VG name oraclevg: Existing 5Rxet9-eL9E-8hFU-8m98-pVLh-gZMD-e4vZBT (created here) takes precedence over R8fkNM-1vrs-S4DF-reUZ-1pts-zhxk-EHVT1K
Archiving volume group “oraclevg” metadata (seqno 53).
Archiving volume group “oraclevg” metadata (seqno 3).
Creating volume group backup “/etc/lvm/backup/oraclevg” (seqno 3).
Couldn’t find device with uuid unHhGy-Fg3A-Y8wU-PrWh-hwWx-Ki0R-D6Qasc.
There are 1 physical volumes missing.
There are 1 physical volumes missing.
Archiving volume group “oraclevg” metadata (seqno 3).
Archiving volume group “oraclevg” metadata (seqno 53).
Creating volume group backup “/etc/lvm/backup/oraclevg” (seqno 53).
— Volume group —
VG Name oraclevg
System ID
Format lvm2
Metadata Areas 2
Metadata Sequence No 53
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 0
Open LV 0
Max PV 0
Cur PV 3
Act PV 2
VG Size 449.93 GiB
PE Size 4.00 MiB
Total PE 115181
Alloc PE / Size 0 / 0
Free PE / Size 115181 / 449.93 GiB
VG UUID 5Rxet9-eL9E-8hFU-8m98-pVLh-gZMD-e4vZBT

— Physical volumes —
PV Name /dev/mapper/crashvgp1
PV UUID Q8XgjC-wgao-uABU-6o39-9SVO-DSwE-zFcTSb
PV Status allocatable
Total PE / Free PE 25595 / 25595

PV Name unknown device
PV UUID unHhGy-Fg3A-Y8wU-PrWh-hwWx-Ki0R-D6Qasc
PV Status allocatable
Total PE / Free PE 12795 / 12795

PV Name /dev/mapper/mpathbp1
PV UUID IMYMJx-H5xY-d16M-M63Q-1lHt-4oLN-xtzoeJ
PV Status allocatable
Total PE / Free PE 76791 / 76791

Many LVM commands can be run with --select vg_uuid

root@linux:~ # vgchange -a n --select vg_uuid=5Rxet9-eL9E-8hFU-8m98-pVLh-gZMD-e4vZBT
WARNING: Inconsistent metadata found for VG oraclevg – updating to use version 54
Volume group “oraclevg” successfully changed
0 logical volume(s) in volume group “oraclevg” now active

Removing the oraclevg copy that is missing a physical volume, forcing the removal

root@linux:~ # vgremove --select vg_uuid=5Rxet9-eL9E-8hFU-8m98-pVLh-gZMD-e4vZBT -f
Volume group “oraclevg” successfully removed

Running vgs -v no longer shows the duplicate

root@linux:~ # vgs -v
Using volume group(s) on command line.
Archiving volume group “oraclevg” metadata (seqno 3).
Creating volume group backup “/etc/lvm/backup/oraclevg” (seqno 3).
VG Attr Ext #PV #LV #SN VSize VFree VG UUID VProfile
oraclevg wz–n- 4.00m 2 2 0 149.96g 0 R8fkNM-1vrs-S4DF-reUZ-1pts-zhxk-EHVT1K
rootvg wz–n- 4.00m 1 10 0 279.12g 143.62g 685XSf-7Dsf-76oL-5pp7-t27Z-nT1o-dqXuUB

Multipathed disk showing error message: Checksum error

Running pvs shows the error message Checksum error and the disk is not part of a volume group

root@linux:~ # pvs | grep NP0_trans_disk_002
/dev/mapper/NP0_trans_disk_002: Checksum error
Couldn’t read volume group metadata.
/dev/mapper/NP0_trans_disk_002: Checksum error
Couldn’t read volume group metadata.
/dev/mapper/NP0_trans_disk_002: Checksum error
Couldn’t read volume group metadata.
/dev/mapper/NP0_trans_disk_002 lvm2 — 50.00g 50.00g

Check the files in the /etc/lvm/backup directory for one that references the disk showing the Checksum error

root@linux:~ # cd /etc/lvm/backup
root@linux:~ # grep NP0_trans_disk_002 *
vgNP0trans: device = “/dev/mapper/NP0_trans_disk_002” # Hint only

Restore the volume group configuration

root@linux:~ # vgcfgrestore -f /etc/lvm/backup/vgNP0trans vgNP0trans
/dev/mapper/NP0_trans_disk_002: Checksum error
Couldn’t read volume group metadata.
Restored volume group vgNP0trans

The physical volume starts showing volume group information

root@linux:~ # pvs | grep NP0_trans_disk_002
/dev/mapper/NP0_trans_disk_002 vgNP0trans lvm2 a– 46.02g 8.00m

Disk removed on Linux and then LVM commands giving error: read failed after 0 of 4096 at 0: Input/output error

Some LUNs were removed from the server and now LVM commands give error messages

root@linux:~# pvs
/dev/mapper/350002ac4f691374a: read failed after 0 of 4096 at 0: Input/output error
/dev/mapper/350002ac4f691374a: read failed after 0 of 4096 at 107374116864: Input/output error
/dev/mapper/350002ac4f691374a: read failed after 0 of 4096 at 107374174208: Input/output error
/dev/mapper/350002ac4f691374a: read failed after 0 of 4096 at 4096: Input/output error
/dev/vgHP0ascs/lv_ASCS: read failed after 0 of 4096 at 0: Input/output error
/dev/vgHP0ascs/lv_ASCS: read failed after 0 of 4096 at 10737352704: Input/output error
/dev/vgHP0ascs/lv_ASCS: read failed after 0 of 4096 at 10737410048: Input/output error
/dev/vgHP0ascs/lv_ASCS: read failed after 0 of 4096 at 4096: Input/output error
/dev/vgHP0ascs/lv_NOVELL_RemoteLoader: read failed after 0 of 4096 at 0: Input/output error
/dev/vgHP0ascs/lv_NOVELL_RemoteLoader: read failed after 0 of 4096 at 117374976: Input/output error
/dev/vgHP0ascs/lv_NOVELL_RemoteLoader: read failed after 0 of 4096 at 117432320: Input/output error
/dev/vgHP0ascs/lv_NOVELL_RemoteLoader: read failed after 0 of 4096 at 4096: Input/output error
/dev/vgHP0ascs/lv_sapmnt_global: read failed after 0 of 4096 at 0: Input/output error
/dev/vgHP0ascs/lv_sapmnt_global: read failed after 0 of 4096 at 10737352704: Input/output error
/dev/vgHP0ascs/lv_sapmnt_global: read failed after 0 of 4096 at 10737410048: Input/output error
/dev/vgHP0ascs/lv_sapmnt_global: read failed after 0 of 4096 at 4096: Input/output error
/dev/vgHP0ascs/lv_sapmnt_profile: read failed after 0 of 4096 at 0: Input/output error
/dev/vgHP0ascs/lv_sapmnt_profile: read failed after 0 of 4096 at 2147418112: Input/output error
/dev/vgHP0ascs/lv_sapmnt_profile: read failed after 0 of 4096 at 2147475456: Input/output error
/dev/vgHP0ascs/lv_sapmnt_profile: read failed after 0 of 4096 at 4096: Input/output error
/dev/vgHP0ascs/lv_usrsapHP0: read failed after 0 of 4096 at 0: Input/output error
/dev/vgHP0ascs/lv_usrsapHP0: read failed after 0 of 4096 at 7516127232: Input/output error
/dev/vgHP0ascs/lv_usrsapHP0: read failed after 0 of 4096 at 7516184576: Input/output error
/dev/vgHP0ascs/lv_usrsapHP0: read failed after 0 of 4096 at 4096: Input/output error
/dev/vgHP0ascs/lv_sapmnt: read failed after 0 of 4096 at 0: Input/output error
/dev/vgHP0ascs/lv_sapmnt: read failed after 0 of 4096 at 20971454464: Input/output error
/dev/vgHP0ascs/lv_sapmnt: read failed after 0 of 4096 at 20971511808: Input/output error
/dev/vgHP0ascs/lv_sapmnt: read failed after 0 of 4096 at 4096: Input/output error
/dev/vgHP0ascs/lv_sapmnt_exe: read failed after 0 of 4096 at 0: Input/output error
/dev/vgHP0ascs/lv_sapmnt_exe: read failed after 0 of 4096 at 5368643584: Input/output error
/dev/vgHP0ascs/lv_sapmnt_exe: read failed after 0 of 4096 at 5368700928: Input/output error
/dev/vgHP0ascs/lv_sapmnt_exe: read failed after 0 of 4096 at 4096: Input/output error
PV         VG     Fmt  Attr PSize  PFree
/dev/sda2  vgroot lvm2 a--  99.50g 38.74g

Use the dmsetup command to identify the stale device-mapper entries, then remove them one by one. Work from the bottom up: remove the logical volumes first and the underlying multipath device last

root@linux:~# dmsetup ls | grep lv_sapmnt_exe
vgHP0ascs-lv_sapmnt_exe (253:21)
root@linux:~# dmsetup remove vgHP0ascs-lv_sapmnt_exe

root@linux:~# dmsetup ls | grep lv_sapmnt
vgHP0ascs-lv_sapmnt_profile (253:18)
vgHP0ascs-lv_sapmnt_global (253:17)
vgHP0ascs-lv_sapmnt (253:20)
root@linux:~# dmsetup remove vgHP0ascs-lv_sapmnt

root@linux:~# dmsetup ls | grep lv_usrsapHP0
vgHP0ascs-lv_usrsapHP0 (253:19)
root@linux:~# dmsetup remove vgHP0ascs-lv_usrsapHP0

root@linux:~# dmsetup ls | grep lv_sapmnt_profile
vgHP0ascs-lv_sapmnt_profile (253:18)
root@linux:~# dmsetup remove vgHP0ascs-lv_sapmnt_profile

root@linux:~# dmsetup ls | grep lv_sapmnt_global
vgHP0ascs-lv_sapmnt_global (253:17)
root@linux:~# dmsetup remove vgHP0ascs-lv_sapmnt_global

root@linux:~# dmsetup ls | grep lv_NOVELL_RemoteLoader
vgHP0ascs-lv_NOVELL_RemoteLoader (253:16)
root@linux:~# dmsetup remove vgHP0ascs-lv_NOVELL_RemoteLoader

root@linux:~# dmsetup ls | grep lv_ASCS
vgHP0ascs-lv_ASCS (253:15)
root@linux:~# dmsetup remove vgHP0ascs-lv_ASCS

root@linux:~# dmsetup ls | grep 350002ac4f691374a
350002ac4f691374a (253:14)
root@linux:~# dmsetup remove 350002ac4f691374a
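
The per-device removals above can also be generated in one pass. Below is a minimal sketch; the `gen_remove_cmds` helper is hypothetical, but it only reproduces the bottom-to-top order used above by sorting the `dmsetup ls` entries by minor number, descending, so the logical volumes come before the underlying multipath device.

```shell
# Hypothetical helper: turn "dmsetup ls" lines like
#   vgHP0ascs-lv_sapmnt_exe (253:21)
# into "dmsetup remove" commands, highest minor number first.
gen_remove_cmds() {
    sed 's/[():]/ /g' |     # strip "(", ")" and ":" so the minor becomes field 3
        sort -k3,3nr |      # sort by minor, numeric, descending
        awk '{print "dmsetup remove " $1}'
}

# On the live system (review the output before piping it to bash):
#   dmsetup ls | grep -E 'vgHP0ascs|350002ac4f691374a' | gen_remove_cmds
```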

After removing the stale entries, pvs no longer shows the error messages

root@linux:~# pvs
PV         VG     Fmt  Attr PSize  PFree
/dev/sda2  vgroot lvm2 a--  99.50g 38.74g

Linux LVM: pvs showing dm-XX. How to map and show the friendly name (alias)

If listing the disks used by LVM shows only the dm-XX device names, you can make it display the multipath friendly names instead

root@linux:~ # pvs
PV                VG         Fmt  Attr PSize   PFree
/dev/cciss/c0d0p2 rootvg     lvm2 a-   135.69G 11.69G
/dev/dm-0         softwarevg lvm2 a-   100.00G 30.00G
/dev/dm-1         softwarevg lvm2 a-   100.00G 0
/dev/dm-2         softwarevg lvm2 a-   100.00G 0
/dev/dm-33        softwarevg lvm2 a-   100.99G 100.99G
/dev/dm-6         bkpcvrdvg  lvm2 a-   50.00G  0
/dev/dm-7         softwarevg lvm2 a-   100.00G 0
/dev/dm-8         softwarevg lvm2 a-   50.00G  0

Edit the file /etc/lvm/lvm.conf and tell LVM not to use the cache file by setting write_cache_state to 0

root@linux:~ # vi /etc/lvm/lvm.conf
write_cache_state = 0

And delete the cache file

root@linux:~ # rm /etc/lvm/.cache
or
root@linux:~ # rm /etc/lvm/cache/.cache
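
The two steps can be combined into a short script. The sketch below runs against a scratch copy of the file so it is safe to dry-run; the /tmp/lvm.conf.demo path is a stand-in, not part of a real LVM setup.

```shell
# Run against a scratch copy; set CONF=/etc/lvm/lvm.conf (as root) for real.
CONF=${CONF:-/tmp/lvm.conf.demo}
printf 'write_cache_state = 1\n' > "$CONF"    # stand-in for the real file

# disable the persistent device-name cache
sed -i 's/^\([[:space:]]*write_cache_state[[:space:]]*=\).*/\1 0/' "$CONF"

# on the real system, also delete the stale cache file:
#   rm -f /etc/lvm/.cache /etc/lvm/cache/.cache

grep write_cache_state "$CONF"    # -> write_cache_state = 0
```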

pvs should now display the friendly names

root@linux:~ # pvs
PV                          VG         Fmt  Attr PSize   PFree
/dev/cciss/c0d0p2           rootvg     lvm2 a-   135.69G 11.69G
/dev/mapper/bkpdisk01-part1 bkpcvrdvg  lvm2 a-   50.00G  0
/dev/mapper/sfwdisk01-part1 softwarevg lvm2 a-   100.00G 0
/dev/mapper/sfwdisk02-part1 softwarevg lvm2 a-   50.00G  0
/dev/mapper/sfwdisk03       softwarevg lvm2 a-   100.00G 0
/dev/mapper/sfwdisk04       softwarevg lvm2 a-   100.00G 30.00G
/dev/mapper/sfwdisk05       softwarevg lvm2 a-   100.00G 0
/dev/mapper/sfwdisk05_NEW1  softwarevg lvm2 a-   100.99G 100.99G

Also check the filter parameter in the file /etc/lvm/lvm.conf

filter = [ "a|cciss/.*|", "a|/dev/mapper/.*|", "a|/dev/sda.*|", "r|/dev/sd.*|", "r|/dev/dm-.*|" ]

If you need to map a device manually, go to /dev/mapper and do a long listing. Match the minor number (the number after the 253,) against the dm-XX name. E.g. dm-33 (minor 33) is sfwdisk05_NEW1

root@linux:/dev/mapper # ls -l
total 0
brw------- 1 root root 253, 18 Dec 16 10:56 bkpcvrdvg-apliclv
brw------- 1 root root 253, 19 Dec 16 10:56 bkpcvrdvg-bkpcvrdlv
brw------- 1 root root 253, 3 Dec 16 10:56 bkpdisk01
brw------- 1 root root 253, 6 Dec 16 10:56 bkpdisk01-part1
lrwxrwxrwx 1 root root 16 Dec 16 10:56 control -> ../device-mapper
brw------- 1 root root 253, 28 Mar 13 05:45 mpathe
brw------- 1 root root 253, 29 Mar 13 05:45 mpathf
brw------- 1 root root 253, 30 Mar 13 05:45 mpathg
brw------- 1 root root 253, 31 Mar 13 05:45 mpathh
brw------- 1 root root 253, 32 Mar 13 05:45 mpathi
brw------- 1 root root 253, 9 Dec 16 10:57 rootvg-auditlv
brw------- 1 root root 253, 10 Dec 16 10:57 rootvg-locallv
brw------- 1 root root 253, 11 Dec 16 10:57 rootvg-optlv
brw------- 1 root root 253, 12 Dec 16 10:56 rootvg-rootlv
brw------- 1 root root 253, 13 Dec 16 10:56 rootvg-swaplv
brw------- 1 root root 253, 14 Dec 16 10:57 rootvg-tmplv
brw------- 1 root root 253, 15 Dec 16 10:58 rootvg-userslv
brw------- 1 root root 253, 16 Dec 16 10:58 rootvg-usrlv
brw------- 1 root root 253, 17 Dec 16 10:58 rootvg-varlv
brw------- 1 root root 253, 4 Dec 16 10:56 sfwdisk01
brw------- 1 root root 253, 7 Dec 16 10:56 sfwdisk01-part1
brw------- 1 root root 253, 5 Dec 16 10:56 sfwdisk02
brw------- 1 root root 253, 8 Dec 16 10:56 sfwdisk02-part1
brw------- 1 root root 253, 1 Dec 16 10:56 sfwdisk03
brw------- 1 root root 253, 0 Dec 16 10:56 sfwdisk04
brw------- 1 root root 253, 2 Dec 16 10:56 sfwdisk05
brw------- 1 root root 253, 27 Mar 13 09:24 sfwdisk05_NEW
brw------- 1 root root 253, 33 Mar 13 09:26 sfwdisk05_NEW1
brw------- 1 root root 253, 25 Dec 16 12:25 softwarevg-applv
brw------- 1 root root 253, 26 Dec 16 10:56 softwarevg-arqlv
brw------- 1 root root 253, 21 Dec 16 10:56 softwarevg-deploylv
brw------- 1 root root 253, 22 Dec 16 10:56 softwarevg-logslv
brw------- 1 root root 253, 24 Dec 16 10:56 softwarevg-oraclelv
brw------- 1 root root 253, 20 Dec 16 10:56 softwarevg-softwarelv
brw------- 1 root root 253, 23 Dec 16 10:56 softwarevg-transferlv
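
The lookup can also be scripted. Below is a minimal sketch; the `dm_to_name` helper is hypothetical, but it only automates the manual procedure of matching the minor number in the `ls -l` output. On recent kernels, `cat /sys/block/dm-33/dm/name` gives the same answer directly.

```shell
# Hypothetical helper: translate a device-mapper minor number to its
# /dev/mapper alias by matching the "253, NN" column in "ls -l" output.
dm_to_name() {                  # usage: ls -l /dev/mapper | dm_to_name 33
    # block devices only ($1 starts with "b"); $6 is the minor, $NF the alias
    awk -v m="$1" '$1 ~ /^b/ && $6 == m { print $NF }'
}

# Example: in the listing above, minor 33 resolves to sfwdisk05_NEW1
#   ls -l /dev/mapper | dm_to_name 33
```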