Tag: vgdisplay

Linux LVM: Couldn't find device with uuid unHhGy-Fg3A-Y8wU-PrWh-hwWx-Ki0R-D6Qasc

On this server, any command that uses LVM returns an error message complaining about a missing disk

root@linux:~ # pvs
Couldn't find device with uuid unHhGy-Fg3A-Y8wU-PrWh-hwWx-Ki0R-D6Qasc.
WARNING: Inconsistent metadata found for VG oraclevg - updating to use version 24
PV VG Fmt Attr PSize PFree
/dev/mapper/crashvgp1 oraclevg lvm2 a--u 99.98g 99.98g
/dev/mapper/mpathbp1 oraclevg lvm2 a--u 299.96g 299.96g
/dev/mapper/oraclevg_1p1 oraclevg lvm2 a--u 99.98g 0
/dev/mapper/oraclevg_2p1 oraclevg lvm2 a--u 49.98g 0
/dev/sda2 rootvg lvm2 a--u 279.12g 143.62g
unknown device oraclevg lvm2 a-mu 49.98g 49.98g
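To find out which device the orphaned UUID belonged to, the LVM metadata backups usually still name it. A minimal sketch, assuming the stock backup and archive locations on a Linux LVM system:

```shell
# Search the LVM metadata backups/archives for the UUID pvs complained
# about; the matching file records the PV's old device path and layout
uuid="unHhGy-Fg3A-Y8wU-PrWh-hwWx-Ki0R-D6Qasc"
grep -rl "$uuid" /etc/lvm/backup /etc/lvm/archive 2>/dev/null ||
  echo "not found in backups"
```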

Volume group oraclevg shows up twice

root@linux:~ # vgs -v
Using volume group(s) on command line.
Cache: Duplicate VG name oraclevg: Existing 5Rxet9-eL9E-8hFU-8m98-pVLh-gZMD-e4vZBT (created here) takes precedence over R8fkNM-1vrs-S4DF-reUZ-1pts-zhxk-EHVT1K
Archiving volume group "oraclevg" metadata (seqno 33).
Archiving volume group "oraclevg" metadata (seqno 3).
Creating volume group backup "/etc/lvm/backup/oraclevg" (seqno 3).
Couldn't find device with uuid unHhGy-Fg3A-Y8wU-PrWh-hwWx-Ki0R-D6Qasc.
Couldn't find device with uuid unHhGy-Fg3A-Y8wU-PrWh-hwWx-Ki0R-D6Qasc.
Couldn't find device with uuid unHhGy-Fg3A-Y8wU-PrWh-hwWx-Ki0R-D6Qasc.
Couldn't find device with uuid unHhGy-Fg3A-Y8wU-PrWh-hwWx-Ki0R-D6Qasc.
WARNING: Inconsistent metadata found for VG oraclevg - updating to use version 34
There are 1 physical volumes missing.
There are 1 physical volumes missing.
Archiving volume group "oraclevg" metadata (seqno 3).
Archiving volume group "oraclevg" metadata (seqno 35).
Creating volume group backup "/etc/lvm/backup/oraclevg" (seqno 35).
VG Attr Ext #PV #LV #SN VSize VFree VG UUID VProfile
oraclevg wz--n- 4.00m 2 2 0 149.96g 0 R8fkNM-1vrs-S4DF-reUZ-1pts-zhxk-EHVT1K
oraclevg wz-pn- 4.00m 3 0 0 449.93g 449.93g 5Rxet9-eL9E-8hFU-8m98-pVLh-gZMD-e4vZBT
rootvg wz--n- 4.00m 1 10 0 279.12g 143.62g 685XSf-7Dsf-76oL-5pp7-t27Z-nT1o-dqXuUB
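When two VGs share a name, you need the UUID to address one of them, and the 'p' (partial) bit in the VG attributes marks the copy with the missing PV. The UUID can be pulled out programmatically; a sketch, assuming the Linux LVM reporting fields shown above:

```shell
# Print the UUID of any VG whose attr field contains 'p' (partial,
# i.e. one or more physical volumes missing)
vgs --noheadings -o vg_name,vg_attr,vg_uuid 2>/dev/null |
  awk '$2 ~ /p/ { print $3 }'
```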

To view the properties of a specific volume group, use --select vg_uuid= with the UUID gathered from the previous command

root@linux:~ # vgdisplay -v --select vg_uuid=5Rxet9-eL9E-8hFU-8m98-pVLh-gZMD-e4vZBT
Using volume group(s) on command line.
Cache: Duplicate VG name oraclevg: Existing 5Rxet9-eL9E-8hFU-8m98-pVLh-gZMD-e4vZBT (created here) takes precedence over R8fkNM-1vrs-S4DF-reUZ-1pts-zhxk-EHVT1K
Archiving volume group "oraclevg" metadata (seqno 53).
Archiving volume group "oraclevg" metadata (seqno 3).
Creating volume group backup "/etc/lvm/backup/oraclevg" (seqno 3).
Couldn't find device with uuid unHhGy-Fg3A-Y8wU-PrWh-hwWx-Ki0R-D6Qasc.
There are 1 physical volumes missing.
There are 1 physical volumes missing.
Archiving volume group "oraclevg" metadata (seqno 3).
Archiving volume group "oraclevg" metadata (seqno 53).
Creating volume group backup "/etc/lvm/backup/oraclevg" (seqno 53).
--- Volume group ---
VG Name oraclevg
System ID
Format lvm2
Metadata Areas 2
Metadata Sequence No 53
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 0
Open LV 0
Max PV 0
Cur PV 3
Act PV 2
VG Size 449.93 GiB
PE Size 4.00 MiB
Total PE 115181
Alloc PE / Size 0 / 0
Free PE / Size 115181 / 449.93 GiB
VG UUID 5Rxet9-eL9E-8hFU-8m98-pVLh-gZMD-e4vZBT

--- Physical volumes ---
PV Name /dev/mapper/crashvgp1
PV UUID Q8XgjC-wgao-uABU-6o39-9SVO-DSwE-zFcTSb
PV Status allocatable
Total PE / Free PE 25595 / 25595

PV Name unknown device
PV UUID unHhGy-Fg3A-Y8wU-PrWh-hwWx-Ki0R-D6Qasc
PV Status allocatable
Total PE / Free PE 12795 / 12795

PV Name /dev/mapper/mpathbp1
PV UUID IMYMJx-H5xY-d16M-M63Q-1lHt-4oLN-xtzoeJ
PV Status allocatable
Total PE / Free PE 76791 / 76791

Many LVM commands can be run with --select vg_uuid

root@linux:~ # vgchange -a n --select vg_uuid=5Rxet9-eL9E-8hFU-8m98-pVLh-gZMD-e4vZBT
WARNING: Inconsistent metadata found for VG oraclevg - updating to use version 54
Volume group "oraclevg" successfully changed
0 logical volume(s) in volume group "oraclevg" now active

Now I force-remove the oraclevg that is missing a physical volume

root@linux:~ # vgremove --select vg_uuid=5Rxet9-eL9E-8hFU-8m98-pVLh-gZMD-e4vZBT -f
Volume group "oraclevg" successfully removed

Running vgs -v doesn't show the duplicate anymore

root@linux:~ # vgs -v
Using volume group(s) on command line.
Archiving volume group "oraclevg" metadata (seqno 3).
Creating volume group backup "/etc/lvm/backup/oraclevg" (seqno 3).
VG Attr Ext #PV #LV #SN VSize VFree VG UUID VProfile
oraclevg wz--n- 4.00m 2 2 0 149.96g 0 R8fkNM-1vrs-S4DF-reUZ-1pts-zhxk-EHVT1K
rootvg wz--n- 4.00m 1 10 0 279.12g 143.62g 685XSf-7Dsf-76oL-5pp7-t27Z-nT1o-dqXuUB

HP-UX vgdisplay: Cannot display volume group "/dev/vgWP0log1_stage"

Listing the volume groups on the server shows messages saying some of them cannot be displayed

root@hpux:~ # vgdisplay | grep Name
VG Name /dev/vg00
VG Name /dev/vg01
vgdisplay: Volume group not activated.
vgdisplay: Cannot display volume group "/dev/vgWP0log1_stage".
vgdisplay: Volume group not activated.
vgdisplay: Cannot display volume group "/dev/vgWP0log2_stage".
vgdisplay: Volume group not activated.
vgdisplay: Cannot display volume group "/dev/vgECPlog1_stage".
vgdisplay: Volume group not activated.
vgdisplay: Cannot display volume group "/dev/vgECPlog2_stage".
vgdisplay: Volume group not activated.
vgdisplay: Cannot display volume group "/dev/vgXPTO_stage".
vgdisplay: Volume group not activated.
vgdisplay: Cannot display volume group "/dev/vgSCP_stage".
vgdisplay: Volume group not activated.
vgdisplay: Cannot display volume group "/dev/vgSCPdat1_stage".
vgdisplay: Volume group not activated.
vgdisplay: Cannot display volume group "/dev/vgSCPjc_stage".
vgdisplay: Volume group not activated.
vgdisplay: Cannot display volume group "/dev/vgSCPlog1_stage".
vgdisplay: Volume group not activated.
vgdisplay: Cannot display volume group "/dev/vgSCPlog2_stage".

This server had all its LUNs removed, so in this case I can simply remove the volume groups from the system

root@hpux:~ # vgexport vgWP0log1_stage
vgexport: Volume group "vgWP0log1_stage" has been successfully removed.
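Since every *_stage group is stale, the vgexport calls can be generated straight from the vgdisplay complaints instead of typed one by one. A sketch, assuming the error format shown above; it only prints the commands, so review the list before running any of them:

```shell
# Turn each "Cannot display volume group" complaint into a vgexport
# command (printed only -- run them manually after checking the list)
vgdisplay 2>&1 |
  awk -F'"' '/Cannot display volume group/ { print "vgexport " $2 }'
```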

HP-UX HPOM: The module VOLMON has detected an inconsistence between the number of LV and the number of current LV. Please make some UX expert verify this inconsistence due there is a risk of data corruption

Node : hpux.setaoffice.com
Node Type : Itanium 64/32(HTTPS)
Severity : major
OM Server Time: 2015-12-01 23:50:35
Message : UXMON: The number of Open LV and Current LV is different for VG: /dev/vgEP0_bc.
Msg Group : OS
Application : volmon
Object : LV
Event Type : NONE
Instance Name : NONE
Instruction : The module VOLMON has detected an inconsistence between the number of LV and the number of current LV. Please make some UX expert verify this inconsistence due there is a risk of data corruption

Checking the volume group

root@hp-ux:~ # vgdisplay vgEP0_bc
--- Volume groups ---
VG Name /dev/vgEP0_bc
VG Write Access read/write
VG Status available
Max LV 255
Cur LV 27
Open LV 26
Max PV 40
Cur PV 15
Act PV 11
Max PE per PV 50000
VGDA 22
PE Size (Mbytes) 16
Total PE 91253
Alloc PE 91253
Free PE 0
Total PVG 0
Total Spare PVs 0
Total Spare PVs in use 0
VG Version 1.0
VG Max Size 31250g
VG Max Extents 2000000

Checking all logical volumes from the volume group vgEP0_bc

root@hp-ux:~ # vgdisplay -v vgEP0_bc | grep "LV Name" | awk '{print "lvdisplay -v "$3" | grep -v disk"}'
lvdisplay -v /dev/vgEP0_bc/lv11202 | grep -v disk
lvdisplay -v /dev/vgEP0_bc/lvDVEBMGS00 | grep -v disk
lvdisplay -v /dev/vgEP0_bc/lvNOVELL_RemoteLoader | grep -v disk
lvdisplay -v /dev/vgEP0_bc/lvSCS01 | grep -v disk
lvdisplay -v /dev/vgEP0_bc/lvSYS | grep -v disk
lvdisplay -v /dev/vgEP0_bc/lvconfig | grep -v disk
lvdisplay -v /dev/vgEP0_bc/lvdesenvolvimentos | grep -v disk
lvdisplay -v /dev/vgEP0_bc/lvdrlocalfs | grep -v disk
lvdisplay -v /dev/vgEP0_bc/lvinterf | grep -v disk -> Problems here
lvdisplay -v /dev/vgEP0_bc/lvinterface | grep -v disk
lvdisplay -v /dev/vgEP0_bc/lvol26 | grep -v disk
lvdisplay -v /dev/vgEP0_bc/lvora10264 | grep -v disk
lvdisplay -v /dev/vgEP0_bc/lvora11204 | grep -v disk
lvdisplay -v /dev/vgEP0_bc/lvoraarch | grep -v disk
lvdisplay -v /dev/vgEP0_bc/lvoracle | grep -v disk
lvdisplay -v /dev/vgEP0_bc/lvoracli | grep -v disk
lvdisplay -v /dev/vgEP0_bc/lvput | grep -v disk
lvdisplay -v /dev/vgEP0_bc/lvreorg | grep -v disk
lvdisplay -v /dev/vgEP0_bc/lvsapmnt | grep -v disk
lvdisplay -v /dev/vgEP0_bc/lvsapmntWDP | grep -v disk
lvdisplay -v /dev/vgEP0_bc/lvstage | grep -v disk
lvdisplay -v /dev/vgEP0_bc/lvtrans | grep -v disk
lvdisplay -v /dev/vgEP0_bc/lvtransARCH | grep -v disk
lvdisplay -v /dev/vgEP0_bc/lvusrsapWDP | grep -v disk
lvdisplay -v /dev/vgEP0_bc/lvusrsapdaa | grep -v disk
lvdisplay -v /dev/vgEP0_bc/lvusrsaptmp | grep -v disk

Checking the logical volume with -v for more verbose output, I saw that the PE1 column shows question marks. The disk needs to be replaced and this logical volume recreated

root@hp-ux:~ # lvdisplay -v /dev/vgEP0_bc/lvinterf | grep -v disk | more
--- Logical volumes ---
LV Name /dev/vgEP0_bc/lvinterf
VG Name /dev/vgEP0_bc
LV Permission read/write
LV Status available/syncd
Mirror copies 0
Consistency Recovery MWC
Schedule parallel
LV Size (Mbytes) 614400
Current LE 38400
Allocated PE 38400
Stripes 0
Stripe Size (Kbytes) 0
Bad block on
Allocation strict
IO Timeout (Seconds) default

--- Distribution of logical volume ---
PV Name LE on PV PE on PV

--- Logical extents ---
LE PV1 PE1 Status 1
08750 ??? 15952 current
08751 ??? 15953 current
08752 ??? 15954 current
08753 ??? 15955 current
08754 ??? 15956 current
08755 ??? 15957 current
08756 ??? 15958 current
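Rather than eyeballing each of the 26 lvdisplay outputs, the check can be automated: any LV whose extent map contains '???' has extents on the unavailable PV. A sketch, assuming the HP-UX LVM output format shown above:

```shell
# For every LV in the VG, flag the ones whose extent listing shows
# '???' in place of a PV name (extents sitting on a missing disk)
for lv in $(vgdisplay -v vgEP0_bc 2>/dev/null | awk '/LV Name/ { print $3 }'); do
  if lvdisplay -v "$lv" 2>/dev/null | grep -q '???'; then
    echo "$lv: extents on missing PV"
  fi
done
```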

HP-UX: Adding a new disk that is part of a Physical Volume Group (PVG)

root@hpux:~ # vgdisplay -v vgLP0data
--- Volume groups ---
VG Name /dev/vgLP0data
VG Write Access read/write
VG Status available, exclusive
Max LV 2047
Cur LV 7
Open LV 7
Cur Snapshot LV 0
Max PV 2048
Cur PV 7
Act PV 7
Max PE per PV 447994
VGDA 14
PE Size (Mbytes) 32
Unshare unit size (Kbytes) 1024
Total PE 447993
Alloc PE 447986
Current pre-allocated PE 0
Free PE 7
Total PVG 1
Total Spare PVs 0
Total Spare PVs in use 0
VG Version 2.2
VG Max Size 14335819m
VG Max Extents 447994
Cur Snapshot Capacity 0p
Max Snapshot Capacity 14335819m

--- Logical volumes ---
LV Name /dev/vgLP0data/lvsapdata1
LV Status available/syncd
LV Size (Mbytes) 2047936
Current LE 63998
Allocated PE 63998
Used PV 7

LV Name /dev/vgLP0data/lvsapdata2
LV Status available/syncd
LV Size (Mbytes) 2047936
Current LE 63998
Allocated PE 63998
Used PV 7

LV Name /dev/vgLP0data/lvsapdata3
LV Status available/syncd
LV Size (Mbytes) 2047936
Current LE 63998
Allocated PE 63998
Used PV 7

LV Name /dev/vgLP0data/lvsapdata4
LV Status available/syncd
LV Size (Mbytes) 2047936
Current LE 63998
Allocated PE 63998
Used PV 7

LV Name /dev/vgLP0data/lvsapdata5
LV Status available/syncd
LV Size (Mbytes) 2047936
Current LE 63998
Allocated PE 63998
Used PV 7

LV Name /dev/vgLP0data/lvsapdata6
LV Status available/syncd
LV Size (Mbytes) 2047936
Current LE 63998
Allocated PE 63998
Used PV 7

LV Name /dev/vgLP0data/lvsapdata7
LV Status available/syncd
LV Size (Mbytes) 2047936
Current LE 63998
Allocated PE 63998
Used PV 7

--- Physical volumes ---
PV Name /dev/disk/disk949
PV Status available
Total PE 63999
Free PE 0
Current pre-allocated PE 0
Autoswitch On
Proactive Polling On

PV Name /dev/disk/disk950
PV Status available
Total PE 63999
Free PE 0
Current pre-allocated PE 0
Autoswitch On
Proactive Polling On

PV Name /dev/disk/disk951
PV Status available
Total PE 63999
Free PE 0
Current pre-allocated PE 0
Autoswitch On
Proactive Polling On

PV Name /dev/disk/disk952
PV Status available
Total PE 63999
Free PE 0
Current pre-allocated PE 0
Autoswitch On
Proactive Polling On

PV Name /dev/disk/disk953
PV Status available
Total PE 63999
Free PE 2
Current pre-allocated PE 0
Autoswitch On
Proactive Polling On

PV Name /dev/disk/disk954
PV Status available
Total PE 63999
Free PE 2
Current pre-allocated PE 0
Autoswitch On
Proactive Polling On

PV Name /dev/disk/disk955
PV Status available
Total PE 63999
Free PE 3
Current pre-allocated PE 0
Autoswitch On
Proactive Polling On

--- Physical volume groups ---
PVG Name pvgLP0data
PV Name /dev/disk/disk949
PV Name /dev/disk/disk950
PV Name /dev/disk/disk951
PV Name /dev/disk/disk952
PV Name /dev/disk/disk953
PV Name /dev/disk/disk954
PV Name /dev/disk/disk955

Add the disk to LVM

root@hpux:~ # pvcreate /dev/rdisk/disk670
Physical volume "/dev/rdisk/disk670" has been successfully created.

Add the disk to the volume group, using the -g flag to also add it to the PVG

root@hpux:~ # vgextend -g pvgLP0data vgLP0data /dev/disk/disk670
Volume group "vgLP0data" has been successfully extended.
Physical volume group "pvgLP0data" has been successfully extended.
Volume Group configuration for /dev/vgLP0data has been saved in /etc/lvmconf/vgLP0data.conf

HPOM - UXMON: The number of Open LV and Current LV is different for VG: rootvg

Node : linux.setaoffice.com
Node Type : Intel/AMD x64(HTTPS)
Severity : major
OM Server Time: 2014-08-15 15:46:22
Message : UXMON: The number of Open LV and Current LV is different for VG: rootvg
Msg Group : OS
Application : volmon
Object : LV
Event Type :
not_found

Instance Name :
not_found

Instruction : The module VOLMON has detected an inconsistence between the number of LV and the number
of current LV. Please make some UX expert verify this inconsistence due there is a risk
of data corruption

Verify the volume group described in the alarm

root@linux:~ # vgdisplay rootvg
--- Volume group ---
VG Name rootvg
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 13
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 12
Open LV 11
Max PV 0
Cur PV 1
Act PV 1
VG Size 273.12 GB
PE Size 32.00 MB
Total PE 8740
Alloc PE / Size 6343 / 198.22 GB
Free PE / Size 2397 / 74.91 GB
VG UUID slQEGO-0Ly3-vfXf-1GaD-Kb1O-hoOa-4qkKyH

The number of Current Logical Volumes is different from the number of Open Logical Volumes. Verify if there is an unmounted logical volume.
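On Linux you can list the not-open LVs directly: the sixth character of lv_attr is 'o' when the device is open (mounted or otherwise in use). A sketch, assuming the lvm2 lv_attr layout:

```shell
# Print LVs in rootvg whose device-open bit (6th attr character) is
# not 'o'; these are the candidates for the Open/Cur LV mismatch
lvs --noheadings -o lv_name,lv_attr rootvg 2>/dev/null |
  awk 'substr($2, 6, 1) != "o" { print $1 }'
```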

If you want to exclude this check, edit /var/opt/OV/conf/OpC/vol_mon.cfg

root@linux:~ # vi /var/opt/OV/conf/OpC/vol_mon.cfg
exclude_lv_no_check rootvg

Scanning for a new disk in a VMware host running SUSE Linux 10 SP4

root@linux:~ # cat /etc/*release
SUSE Linux Enterprise Server 10 (x86_64)
VERSION = 10
PATCHLEVEL = 4
LSB_VERSION="core-2.0-noarch:core-3.0-noarch:core-2.0-x86_64:core-3.0-x86_64"

I use the following command to scan the SCSI bus

root@linux:~ # echo "- - -" > /sys/class/scsi_host/host0/scan
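If the VM has more than one SCSI host adapter, the new disk can show up on any of them, so it is safer to rescan them all. A sketch; the "- - -" wildcard means all channels, targets and LUNs:

```shell
# Rescan every SCSI host, not just host0; skip entries that are not
# writable so the loop is harmless wherever it runs
for scan in /sys/class/scsi_host/host*/scan; do
  if [ -w "$scan" ]; then
    echo "- - -" > "$scan"
  fi
done
```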

List the disks; the new disk is the last one

root@linux:~ # fdisk -l

Then I partition the disk. The first time, when there is no valid DOS partition table yet, I like to just write one and call fdisk again

root@linux:~ # fdisk /dev/sdf
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel. Changes will remain in memory only,
until you decide to write them. After that, of course, the previous
content won't be recoverable.

The number of cylinders for this disk is set to 9137.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
(e.g., DOS FDISK, OS/2 FDISK)
Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.

Running again and this time creating the partition

root@linux:~ # fdisk /dev/sdf

The number of cylinders for this disk is set to 9137.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
(e.g., DOS FDISK, OS/2 FDISK)

Command (m for help): n
Command action
e extended
p primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-9137, default 1):
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-9137, default 9137):
Using default value 9137

Command (m for help): t
Selected partition 1
Hex code (type L to list codes): 8e
Changed system type of partition 1 to 8e (Linux LVM)

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.

Add the partition to LVM

root@linux:~ # pvcreate /dev/sdf1
Physical volume "/dev/sdf1" successfully created

Verify the file system size

root@linux:~ # df -h /usr/oradata/oradvt061t
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/softwarevg-dvt061lv
168G 133G 26G 84% /usr/oradata/oradvt061t

Add the disk to the volume group

root@linux:~ # vgextend softwarevg /dev/sdf1
Volume group "softwarevg" successfully extended

See the characteristics of the volume group

root@linux:~ # vgdisplay softwarevg
--- Volume group ---
VG Name softwarevg
System ID
Format lvm2
Metadata Areas 4
Metadata Sequence No 11
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 4
Open LV 4
Max PV 0
Cur PV 4
Act PV 4
VG Size 385.98 GB
PE Size 4.00 MB
Total PE 98810
Alloc PE / Size 69376 / 271.00 GB
Free PE / Size 29434 / 114.98 GB
VG UUID 0BsKwN-18al-1TT8-gctq-fRK4-ccJl-fNXL6g

And the disk added

root@dsv080:~ # pvdisplay /dev/sdf1
--- Physical volume ---
PV Name /dev/sdf1
VG Name softwarevg
PV Size 69.99 GB / not usable 793.00 KB
Allocatable yes
PE Size (KByte) 4096
Total PE 17918
Free PE 17918
Allocated PE 0
PV UUID 21Q3wb-2tKn-xUdP-9ZBa-tqbp-vhvS-MhSbVt
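As a sanity check, the extent counts multiply out to the sizes shown: 17918 free extents at the 4 MB PE size is the roughly 70 GB being added to the LV. For example:

```shell
# 17918 PEs x 4 MB per PE, shown in MB and (integer) GB
echo "$((17918 * 4)) MB"          # 71672 MB
echo "$((17918 * 4 / 1024)) GB"   # 69 GB, matching the 69.99 GB PV
```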

Increase the logical volume using the new disk

root@linux:~ # lvextend -l +17918 /dev/softwarevg/dvt061lv
Extending logical volume dvt061lv to 239.99 GB
Logical volume dvt061lv successfully resized

Then resize the file system

root@linux:~ # ext2online /dev/softwarevg/dvt061lv
ext2online v1.1.18 – 2001/03/18 for EXT2FS 0.5b

Verify the file system size

root@linux:~ # df -h /usr/oradata/oradvt061t
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/softwarevg-dvt061lv
237G 133G 92G 60% /usr/oradata/oradvt061t

HP-UX vgdisplay: /etc/lvmtab: No such file or directory

root@hp-ux:/ # vgdisplay
vgdisplay: /etc/lvmtab: No such file or directory
vgdisplay: No volume group name could be read from "/etc/lvmtab".

To recreate /etc/lvmtab, run vgscan -a so it rescans all the multipathed physical volumes

root@hp-ux:/ # vgscan -a
Creating "/etc/lvmtab".

Following Physical Volumes belong to one Volume Group.
Unable to match these Physical Volumes to a Volume Group.
Use the vgimport command to complete the process.
/dev/dsk/c77t0d1
/dev/dsk/c79t0d1
/dev/dsk/c81t0d1
/dev/dsk/c83t0d1
/dev/dsk/c85t0d1
/dev/dsk/c89t0d1
/dev/dsk/c91t0d1
/dev/dsk/c87t0d1

The Volume Group /dev/vg01 was not matched with any Physical Volumes.
The Volume Group /dev/vgomni was not matched with any Physical Volumes.
*** LVMTAB has been created successfully.
*** If PV links are configured in the system.
*** Do the following to resync information on disk.
*** #1. vgchange -a y
*** #2. lvlnboot -R

Then, depending on your system, you may need to run vgimport against some of the disks on your server
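For instance, a vgimport command line for /dev/vg01 can be assembled from the PV list vgscan printed. A sketch only: the VG-to-PV mapping here is a guess (vgscan said it could not match them), so confirm it against your own records, and note that HP-UX vgimport also expects the VG's group device file to exist first:

```shell
# Build (but do not run) the vgimport call from the PVs vgscan listed;
# echoing first lets you review the command before executing it
pv_list="/dev/dsk/c77t0d1 /dev/dsk/c79t0d1 /dev/dsk/c81t0d1 /dev/dsk/c83t0d1 \
/dev/dsk/c85t0d1 /dev/dsk/c87t0d1 /dev/dsk/c89t0d1 /dev/dsk/c91t0d1"
echo vgimport -v /dev/vg01 $pv_list
```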