Category: HP-UX

HP-UX – /var is filling up with files under /var/stm/logs/os

Filesystem /var was filling up, and under /var/stm/logs/os I found several files.

You need to keep the following files:
memlog
logXXXX.raw.cur
ccbootlog

I deleted several of the logXXXX.raw files to recover disk space.
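
A minimal cleanup sketch, assuming only the rotated logXXXX.raw files should go. The glob log*.raw does not match logXXXX.raw.cur, memlog or ccbootlog, but review the listing before deleting:

root@hpux:~ # cd /var/stm/logs/os
root@hpux:~ # ls log*.raw
root@hpux:~ # rm log*.raw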

HP-UX vgdisplay: Cannot display volume group "/dev/vgWP0log1_stage"

Listing the volume groups available on the server, vgdisplay complained that it could not display several of them:

root@hpux:~ # vgdisplay | grep Name
VG Name /dev/vg00
VG Name /dev/vg01
vgdisplay: Volume group not activated.
vgdisplay: Cannot display volume group "/dev/vgWP0log1_stage".
vgdisplay: Volume group not activated.
vgdisplay: Cannot display volume group "/dev/vgWP0log2_stage".
vgdisplay: Volume group not activated.
vgdisplay: Cannot display volume group "/dev/vgECPlog1_stage".
vgdisplay: Volume group not activated.
vgdisplay: Cannot display volume group "/dev/vgECPlog2_stage".
vgdisplay: Volume group not activated.
vgdisplay: Cannot display volume group "/dev/vgXPTO_stage".
vgdisplay: Volume group not activated.
vgdisplay: Cannot display volume group "/dev/vgSCP_stage".
vgdisplay: Volume group not activated.
vgdisplay: Cannot display volume group "/dev/vgSCPdat1_stage".
vgdisplay: Volume group not activated.
vgdisplay: Cannot display volume group "/dev/vgSCPjc_stage".
vgdisplay: Volume group not activated.
vgdisplay: Cannot display volume group "/dev/vgSCPlog1_stage".
vgdisplay: Volume group not activated.
vgdisplay: Cannot display volume group "/dev/vgSCPlog2_stage".

The LUNs behind these volume groups had already been removed from storage, so in this case the volume groups can simply be removed from the system:

root@hpux:~ # vgexport vgWP0log1_stage
vgexport: Volume group "vgWP0log1_stage" has been successfully removed.
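
The remaining stage volume groups can be exported the same way. A small loop saves retyping, assuming all of them have been confirmed gone from storage:

root@hpux:~ # for vg in vgWP0log2_stage vgECPlog1_stage vgECPlog2_stage vgXPTO_stage vgSCP_stage vgSCPdat1_stage vgSCPjc_stage vgSCPlog1_stage vgSCPlog2_stage
> do
> vgexport $vg
> done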

HP-UX HPOM The module VOLMON has detected an inconsistence between the number of LV and the number of current LV. Please make some UX expert verify this inconsistence due there is a risk of data corruption

Node : hpux.setaoffice.com
Node Type : Itanium 64/32(HTTPS)
Severity : major
OM Server Time: 2015-12-01 23:50:35
Message : UXMON: The number of Open LV and Current LV is different for VG: /dev/vgEP0_bc.
Msg Group : OS
Application : volmon
Object : LV
Event Type : NONE
Instance Name : NONE
Instruction : The module VOLMON has detected an inconsistence between the number of LV and the number of current LV. Please make some UX expert verify this inconsistence due there is a risk of data corruption

Checking the volume group

root@hp-ux:~ # vgdisplay vgEP0_bc
--- Volume groups ---
VG Name /dev/vgEP0_bc
VG Write Access read/write
VG Status available
Max LV 255
Cur LV 27
Open LV 26
Max PV 40
Cur PV 15
Act PV 11
Max PE per PV 50000
VGDA 22
PE Size (Mbytes) 16
Total PE 91253
Alloc PE 91253
Free PE 0
Total PVG 0
Total Spare PVs 0
Total Spare PVs in use 0
VG Version 1.0
VG Max Size 31250g
VG Max Extents 2000000

Cur LV is 27 but Open LV is 26, so one logical volume in the group is not open. Checking all logical volumes from the volume group vgEP0_bc to find which one:

root@hp-ux:~ # vgdisplay -v vgEP0_bc | grep "LV Name" | awk '{print "lvdisplay -v "$3" | grep -v disk"}'
lvdisplay -v /dev/vgEP0_bc/lv11202 | grep -v disk
lvdisplay -v /dev/vgEP0_bc/lvDVEBMGS00 | grep -v disk
lvdisplay -v /dev/vgEP0_bc/lvNOVELL_RemoteLoader | grep -v disk
lvdisplay -v /dev/vgEP0_bc/lvSCS01 | grep -v disk
lvdisplay -v /dev/vgEP0_bc/lvSYS | grep -v disk
lvdisplay -v /dev/vgEP0_bc/lvconfig | grep -v disk
lvdisplay -v /dev/vgEP0_bc/lvdesenvolvimentos | grep -v disk
lvdisplay -v /dev/vgEP0_bc/lvdrlocalfs | grep -v disk
lvdisplay -v /dev/vgEP0_bc/lvinterf | grep -v disk -> Problems
lvdisplay -v /dev/vgEP0_bc/lvinterface | grep -v disk
lvdisplay -v /dev/vgEP0_bc/lvol26 | grep -v disk
lvdisplay -v /dev/vgEP0_bc/lvora10264 | grep -v disk
lvdisplay -v /dev/vgEP0_bc/lvora11204 | grep -v disk
lvdisplay -v /dev/vgEP0_bc/lvoraarch | grep -v disk
lvdisplay -v /dev/vgEP0_bc/lvoracle | grep -v disk
lvdisplay -v /dev/vgEP0_bc/lvoracli | grep -v disk
lvdisplay -v /dev/vgEP0_bc/lvput | grep -v disk
lvdisplay -v /dev/vgEP0_bc/lvreorg | grep -v disk
lvdisplay -v /dev/vgEP0_bc/lvsapmnt | grep -v disk
lvdisplay -v /dev/vgEP0_bc/lvsapmntWDP | grep -v disk
lvdisplay -v /dev/vgEP0_bc/lvstage | grep -v disk
lvdisplay -v /dev/vgEP0_bc/lvtrans | grep -v disk
lvdisplay -v /dev/vgEP0_bc/lvtransARCH | grep -v disk
lvdisplay -v /dev/vgEP0_bc/lvusrsapWDP | grep -v disk
lvdisplay -v /dev/vgEP0_bc/lvusrsapdaa | grep -v disk
lvdisplay -v /dev/vgEP0_bc/lvusrsaptmp | grep -v disk
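
Since the awk output is itself a list of commands, it can be piped straight to a shell instead of being run line by line (a sketch):

root@hp-ux:~ # vgdisplay -v vgEP0_bc | grep "LV Name" | awk '{print "lvdisplay -v "$3" | grep -v disk"}' | sh | more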

Running lvdisplay with -v for verbose output, I saw that the PV1 column shows question marks: LVM no longer knows which physical volume holds those extents. The disk needs to be replaced and this logical volume recreated.

root@hp-ux:~ # lvdisplay -v /dev/vgEP0_bc/lvinterf | grep -v disk | more
--- Logical volumes ---
LV Name /dev/vgEP0_bc/lvinterf
VG Name /dev/vgEP0_bc
LV Permission read/write
LV Status available/syncd
Mirror copies 0
Consistency Recovery MWC
Schedule parallel
LV Size (Mbytes) 614400
Current LE 38400
Allocated PE 38400
Stripes 0
Stripe Size (Kbytes) 0
Bad block on
Allocation strict
IO Timeout (Seconds) default

--- Distribution of logical volume ---
PV Name LE on PV PE on PV

--- Logical extents ---
LE PV1 PE1 Status 1
08750 ??? 15952 current
08751 ??? 15953 current
08752 ??? 15954 current
08753 ??? 15955 current
08754 ??? 15956 current
08755 ??? 15957 current
08756 ??? 15958 current
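
The vgdisplay output above already hints at the cause: Cur PV is 15 but Act PV is 11, so four physical volumes are not activated. One way to identify them (a sketch; strings /etc/lvmtab is the usual trick to list the physical volumes of each version 1.0 volume group, and diskNN below is a placeholder) is to query each PV and see which ones no longer answer:

root@hp-ux:~ # strings /etc/lvmtab
root@hp-ux:~ # pvdisplay /dev/disk/diskNN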

HPUX: vmunix: Evpd inquiry page 83h/80h failed or the current page 83h/80h data do not match the previous known page 83h/80h data on LUN id 0x0 probed beneath the target path (class = tgtpath, instance = 27)

Looking at the log file /var/adm/syslog/syslog.log, I saw these messages:

Mar 26 17:06:17 hpux vmunix: Evpd inquiry page 83h/80h failed or the current page 83h/80h data do not match the previous known page 83h/80h data on LUN id 0x0 probed beneath the target path (class = tgtpath, instance = 37) The lun path is (class = lunpath, instance 33). Run 'scsimgr replace_wwid' command to validate the change
Mar 26 17:06:47 hpux vmunix: Evpd inquiry page 83h/80h failed or the current page 83h/80h data do not match the previous known page 83h/80h data on LUN id 0x0 probed beneath the target path (class = tgtpath, instance = 27) The lun path is (class = lunpath, instance 25). Run 'scsimgr replace_wwid' command to validate the change

Running the command to validate the change

root@hpux:~ # scsimgr replace_wwid -C lunpath -I 27
scsimgr:WARNING: Performing replace_wwid on the resource may have some impact on system operation.
Do you really want to replace? (y/[n])? y
Binding of LUN path 0/2/0/0/0/0.0x5d8d385c1a8e4000.0x0 with new LUN validated successfully

root@hpux:~ # scsimgr -f replace_wwid -C lunpath -I 37
Binding of LUN path 0/4/0/0/0/1.0x5d8d385c1a8e4010.0x0 with new LUN validated successfully
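
After validating, a rescan and a fresh LUN map should confirm the paths are healthy again (a sketch):

root@hpux:~ # ioscan -fnNC disk
root@hpux:~ # ioscan -m lun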

HP-UX: UXMON:Critical multipath error detected. Please see /var/opt/OV/log/OpC/scsi_mon.log for details.

I received this alert because a disk was showing multipath errors on an HP-UX server:

UXMON:Critical multipath error detected. Please see /var/opt/OV/log/OpC/scsi_mon.log for details.

Node : hpux.setaoffice.com
Node Type : Itanium 64/32(HTTPS)
Severity : critical
OM Server Time: 2016-05-24 11:35:09
Message : UXMON:Critical multipath error detected. Please see /var/opt/OV/log/OpC/scsi_mon.log for details.
Msg Group : OS
Application : scsimon
Object : No
Event Type :
Instance Name :
Instruction : No

root@hpux:~ # cat /var/opt/OV/log/OpC/scsi_mon.log
Tue May 24 12:35:07 2016 : Critical /dev/rdisk/disk670 has failed lunpaths! Please check with scsimgr -p lun_map -D /dev/rdisk/disk670

root@hpux:~ # scsimgr -p lun_map -D /dev/rdisk/disk670
lunpath:647:38/0/0/2/0/0/0.0x50060e800574f200.0x405c000000000000:fibre_channel:FAILED:FAILED
lunpath:632:38/0/0/2/0/0/1.0x50060e800574f210.0x405c000000000000:fibre_channel:FAILED:FAILED
lunpath:617:36/0/0/2/0/0/0.0x50060e800574f200.0x405c000000000000:fibre_channel:FAILED:FAILED
lunpath:602:36/0/0/2/0/0/1.0x50060e800574f210.0x405c000000000000:fibre_channel:FAILED:FAILED
lunpath:33:36/0/0/2/0/0/0.0x50060e800574f200.0x40a5000000000000:fibre_channel:ACTIVE:ACTIVE
lunpath:35:36/0/0/2/0/0/1.0x50060e800574f210.0x40a5000000000000:fibre_channel:ACTIVE:ACTIVE
lunpath:37:38/0/0/2/0/0/0.0x50060e800574f200.0x40a5000000000000:fibre_channel:ACTIVE:ACTIVE
lunpath:39:38/0/0/2/0/0/1.0x50060e800574f210.0x40a5000000000000:fibre_channel:ACTIVE:ACTIVE

Removing invalid paths

root@hpux:~ # rmsf -H 38/0/0/2/0/0/0.0x50060e800574f200.0x405c000000000000
root@hpux:~ # rmsf -H 38/0/0/2/0/0/1.0x50060e800574f210.0x405c000000000000
root@hpux:~ # rmsf -H 36/0/0/2/0/0/0.0x50060e800574f200.0x405c000000000000
root@hpux:~ # rmsf -H 36/0/0/2/0/0/1.0x50060e800574f210.0x405c000000000000
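
Instead of copying each hardware path by hand, the failed ones can be extracted from the lun_map output, where the path state is the fifth colon-separated field (a sketch):

root@hpux:~ # scsimgr -p lun_map -D /dev/rdisk/disk670 | awk -F: '$5 == "FAILED" {print $3}' | while read hw
> do
> rmsf -H $hw
> done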

Checking LUN paths

root@hpux:~ # scsimgr -p lun_map -D /dev/rdisk/disk670
lunpath:33:36/0/0/2/0/0/0.0x50060e800574f200.0x40a5000000000000:fibre_channel:ACTIVE:ACTIVE
lunpath:35:36/0/0/2/0/0/1.0x50060e800574f210.0x40a5000000000000:fibre_channel:ACTIVE:ACTIVE
lunpath:37:38/0/0/2/0/0/0.0x50060e800574f200.0x40a5000000000000:fibre_channel:ACTIVE:ACTIVE
lunpath:39:38/0/0/2/0/0/1.0x50060e800574f210.0x40a5000000000000:fibre_channel:ACTIVE:ACTIVE

Checking another disk

root@hpux:~ # scsimgr -p lun_map -D /dev/rdisk/disk31
lunpath:61:0/2/1/0.0x21230002ac001673.0x4001000000000000:fibre_channel:FAILED:AUTH_FAILED
lunpath:62:0/2/1/0.0x21230002ac001673.0x4002000000000000:fibre_channel:ACTIVE:ACTIVE
lunpath:71:0/5/1/0.0x20240002ac001673.0x4001000000000000:fibre_channel:FAILED:AUTH_FAILED
lunpath:72:0/5/1/0.0x20240002ac001673.0x4002000000000000:fibre_channel:ACTIVE:ACTIVE

root@hpux:~ # ioscan -m lun /dev/rdisk/disk31
Class I Lun H/W Path Driver S/W State H/W Type Health Description
=======================================================================
disk 31 64000/0xfa00/0x8c esdisk CLAIMED DEVICE limited 3PARdataVV
0/2/1/0.0x21230002ac001673.0x4001000000000000
0/2/1/0.0x21230002ac001673.0x4002000000000000
0/5/1/0.0x20240002ac001673.0x4001000000000000
0/5/1/0.0x20240002ac001673.0x4002000000000000
/dev/disk/disk31 /dev/rdisk/disk31

The disk reports 2 failed LUN paths:

root@hpux:~ # scsimgr get_info -D /dev/rdisk/disk31|more

STATUS INFORMATION FOR LUN : /dev/rdisk/disk31

Generic Status Information

SCSI services internal state = ONLINE
Device type = Direct_Access
EVPD page 0x83 description code = 1
EVPD page 0x83 description association = 0
EVPD page 0x83 description type = 3
World Wide Identifier (WWID) = 0x50002ac0031a1673
Serial number = " 1405747"
Vendor id = "3PARdata"
Product id = "VV "
Product revision = "3131"
Other properties = ""
SPC protocol revision = 6
Open count (includes chr/blk/pass-thru/class) = 1
Raw open count (includes class/pass-thru) = 0
Pass-thru opens = 0
LUN path count = 4
Active LUN paths = 2
Standby LUN paths = 0
Failed LUN paths = 2
Maximum I/O size allowed = 2097152
Preferred I/O size = 2097152
Outstanding I/Os = 0
I/O load balance policy = round_robin
Path fail threshold time period = 0
Transient time period = 120
Tracing buffer size = 1024
LUN Path used when policy is path_lockdown = NA
LUN access type = NA
Asymmetric logical unit access supported = No
Asymmetric states supported = NA
Preferred paths reported by device = No
Preferred LUN paths = 0

Driver esdisk Status Information :

Capacity in number of blocks = 213909504
Block size in bytes = 512
Number of active IOs = 0
Special properties =
Maximum number of IO retries = 45
IO transfer timeout in secs = 30
FORMAT command timeout in secs = 86400
START UNIT command timeout in secs = 60
Timeout in secs before starting failing IO = 120
IO infinite retries = false

Validating disk paths for disk31

root@hpux:~ # scsimgr -f replace_wwid -D /dev/rdisk/disk31
scsimgr: Successfully validated binding of LUN paths with new LUN.

The invalid paths were removed

root@hpux:~ # ioscan -m lun /dev/rdisk/disk31
Class I Lun H/W Path Driver S/W State H/W Type Health Description
======================================================================
disk 31 64000/0xfa00/0x8c esdisk CLAIMED DEVICE online 3PARdataVV
0/2/1/0.0x21230002ac001673.0x4002000000000000
0/5/1/0.0x20240002ac001673.0x4002000000000000
/dev/disk/disk31 /dev/rdisk/disk31

HP-UX: LVM – Failure possibly caused by PVG-Strict or Distributed allocation policies

I tried to create a logical volume and HP-UX gave an error message:

root@hpux:~ # lvcreate -s g -D y -r N -L 400000 -n lvsapdata8 /dev/vgLP0data
Warning: The "-r" option has been ignored as it is not supported
for volume group version 2.0 or higher
Logical volume "/dev/vgLP0data/lvsapdata8" has been successfully created with
character device "/dev/vgLP0data/rlvsapdata8".
lvcreate: Not enough free physical extents available.
Logical volume "/dev/vgLP0data/lvsapdata8" could not be extended.
Failure possibly caused by PVG-Strict or Distributed allocation policies.

The problem was the combination of the PVG-strict allocation policy (-s g) with the distributed allocation policy (-D y). The volume group had 7 nearly full disks plus one newly added disk, so the creation could not lay extents out the way those options require.
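
A quick way to see the failure coming is to check the free extents per physical volume: with -D y every extent must be allocated round-robin across the PVs of the physical volume group, so seven nearly full disks block the allocation no matter how empty the eighth is (a sketch):

root@hpux:~ # vgdisplay -v vgLP0data | egrep "PV Name|Free PE"

The lvcreate(1M) man page describes both options: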

-s strict Set the strict allocation policy. Mirror copies
of a logical extent can be allocated to share or
not share the same physical volume or physical
volume group. strict can have one of the
following values:

y Set a strict allocation policy. Mirrors of
a logical extent cannot share the same
physical volume. This is the default.

g Set a PVG-strict allocation policy.
Mirrors of a logical extent cannot share
the same physical volume group. A PVG-
strict allocation policy cannot be set on a
logical volume in a volume group that does
not have a physical volume group defined.

n Do not set a strict or PVG-strict
allocation policy. Mirrors of a logical
extent can share the same physical volume.

-D distributed Set the distributed allocation policy.
distributed can have one of the following
values:

y Turn on distributed allocation.

n Turn off distributed allocation. This is
the default.

When the distributed allocation policy is turned
on, only one free extent is allocated from the
first available physical volume. The next free
extent is allocated from the next available
physical volume. Allocation of free extents
proceeds in round-robin order on the list of
available physical volumes.

When the distributed allocation policy is turned
off, all available free extents are allocated
from each available physical volume before
proceeding to the next available physical
volume. This is the default.

The distributed allocation policy REQUIRES the
PVG-strict allocation policy (-s g) to ensure
that mirrors of distributed extents do not
overlap (for maximum availability).

lvcreate(1M) will obtain the list of available
physical volumes from /etc/lvmpvg. See
vgextend(1M) for more information on physical
volume groups and /etc/lvmpvg.

When a logical volume with distributed extents
is mirrored, the resulting layout is commonly
referred to as EXTENT-BASED MIRRORED STRIPES.

Note that EXTENT-BASED MIRRORED STRIPES can be
created without the distributed allocation
policy by adding one extent at a time to the
desired physical volumes through lvextend(1M).

The distributed allocation policy is
incompatible with the striped scheduling policy
(-i stripes) and the contiguous allocation
policy (-C y).

The lvchange(1M) command can be used to assign
the distributed allocation policy to an existing
logical volume.

See lvdisplay(1M) for display values.

See EXAMPLES.

HP-UX: Disk added to Volume Group but showing different Total PE and Free PE than expected

After initializing a new disk for LVM

root@hpux:~ # pvcreate /dev/rdisk/disk670
Physical volume "/dev/rdisk/disk670" has been successfully created.

The disk was added, but it shows the wrong Total PE and Free PE. The volume group was already at 447,993 of its 447,994 maximum extents (VG Max Extents, fixed by VG Max Size 14335819m), so the new disk could contribute only a single extent:

root@hpux:~ # vgextend -g pvgLP0data vgLP0data /dev/disk/disk670
Volume group "vgLP0data" has been successfully extended.
Volume Group configuration for /dev/vgLP0data has been saved in /etc/lvmconf/vgLP0data.conf
root@hpux:~ # vgdisplay -v vgLP0data
--- Volume groups ---
VG Name /dev/vgLP0data
VG Write Access read/write
VG Status available, exclusive
Max LV 2047
Cur LV 8
Open LV 8
Cur Snapshot LV 0
Max PV 2048
Cur PV 8
Act PV 8
Max PE per PV 447994
VGDA 16
PE Size (Mbytes) 32
Unshare unit size (Kbytes) 1024
Total PE 447994
Alloc PE 447986
Current pre-allocated PE 0
Free PE 8
Total PVG 1
Total Spare PVs 0
Total Spare PVs in use 0
VG Version 2.2
VG Max Size 14335819m
VG Max Extents 447994
Cur Snapshot Capacity 0p
Max Snapshot Capacity 14335819m

--- Logical volumes ---
LV Name /dev/vgLP0data/lvsapdata1
LV Status available/syncd
LV Size (Mbytes) 2047936
Current LE 63998
Allocated PE 63998
Used PV 7

LV Name /dev/vgLP0data/lvsapdata2
LV Status available/syncd
LV Size (Mbytes) 2047936
Current LE 63998
Allocated PE 63998
Used PV 7

LV Name /dev/vgLP0data/lvsapdata3
LV Status available/syncd
LV Size (Mbytes) 2047936
Current LE 63998
Allocated PE 63998
Used PV 7

LV Name /dev/vgLP0data/lvsapdata4
LV Status available/syncd
LV Size (Mbytes) 2047936
Current LE 63998
Allocated PE 63998
Used PV 7

LV Name /dev/vgLP0data/lvsapdata5
LV Status available/syncd
LV Size (Mbytes) 2047936
Current LE 63998
Allocated PE 63998
Used PV 7

LV Name /dev/vgLP0data/lvsapdata6
LV Status available/syncd
LV Size (Mbytes) 2047936
Current LE 63998
Allocated PE 63998
Used PV 7

LV Name /dev/vgLP0data/lvsapdata7
LV Status available/syncd
LV Size (Mbytes) 2047936
Current LE 63998
Allocated PE 63998
Used PV 7

LV Name /dev/vgLP0data/lvsapdata8
LV Status available/syncd
LV Size (Mbytes) 0
Current LE 0
Allocated PE 0
Used PV 0

--- Physical volumes ---
PV Name /dev/disk/disk949
PV Status available
Total PE 63999
Free PE 0
Current pre-allocated PE 0
Autoswitch On
Proactive Polling On

PV Name /dev/disk/disk950
PV Status available
Total PE 63999
Free PE 0
Current pre-allocated PE 0
Autoswitch On
Proactive Polling On

PV Name /dev/disk/disk951
PV Status available
Total PE 63999
Free PE 0
Current pre-allocated PE 0
Autoswitch On
Proactive Polling On

PV Name /dev/disk/disk952
PV Status available
Total PE 63999
Free PE 0
Current pre-allocated PE 0
Autoswitch On
Proactive Polling On

PV Name /dev/disk/disk953
PV Status available
Total PE 63999
Free PE 2
Current pre-allocated PE 0
Autoswitch On
Proactive Polling On

PV Name /dev/disk/disk954
PV Status available
Total PE 63999
Free PE 2
Current pre-allocated PE 0
Autoswitch On
Proactive Polling On

PV Name /dev/disk/disk955
PV Status available
Total PE 63999
Free PE 3
Current pre-allocated PE 0
Autoswitch On
Proactive Polling On

PV Name /dev/disk/disk670
PV Status available
Total PE 1
Free PE 1
Current pre-allocated PE 0
Autoswitch On
Proactive Polling On

--- Physical volume groups ---
PVG Name pvgLP0data
PV Name /dev/disk/disk949
PV Name /dev/disk/disk950
PV Name /dev/disk/disk951
PV Name /dev/disk/disk952
PV Name /dev/disk/disk953
PV Name /dev/disk/disk954
PV Name /dev/disk/disk955
PV Name /dev/disk/disk670

We need to change VG Max Size. But first, let’s remove the disk from the volume group

root@hpux:~ # vgreduce vgLP0data /dev/disk/disk670
Physical volume "/dev/disk/disk670" has been successfully deleted from
physical volume group "pvgLP0data".
Volume group "vgLP0data" has been successfully reduced.
Volume Group configuration for /dev/vgLP0data has been saved in /etc/lvmconf/vgLP0data.conf

Then run vgmodify to change the VG Max Size of the volume group. This volume group is version 2.2, so we use the following options:

root@hpux:~ # vgmodify -r -a -S 20t vgLP0data
Reconfiguration of physical volume "/dev/disk/disk949" for the
requested maximum volume group size 20971520 MB succeeded.
Previous number of extents: 63999
Number of extents after reconfiguration: 63999
Physical volume "/dev/disk/disk949" was changed.

Volume Group configuration for /dev/vgLP0data has been saved.

Reconfiguration of physical volume "/dev/disk/disk950" for the
requested maximum volume group size 20971520 MB succeeded.
Previous number of extents: 63999
Number of extents after reconfiguration: 63999
Physical volume "/dev/disk/disk950" was changed.

Volume Group configuration for /dev/vgLP0data has been saved.

Reconfiguration of physical volume "/dev/disk/disk951" for the
requested maximum volume group size 20971520 MB succeeded.
Previous number of extents: 63999
Number of extents after reconfiguration: 63999
Physical volume "/dev/disk/disk951" was changed.

Volume Group configuration for /dev/vgLP0data has been saved.

Reconfiguration of physical volume "/dev/disk/disk952" for the
requested maximum volume group size 20971520 MB succeeded.
Previous number of extents: 63999
Number of extents after reconfiguration: 63999
Physical volume "/dev/disk/disk952" was changed.

Volume Group configuration for /dev/vgLP0data has been saved.

Reconfiguration of physical volume "/dev/disk/disk953" for the
requested maximum volume group size 20971520 MB succeeded.
Previous number of extents: 63999
Number of extents after reconfiguration: 63999
Physical volume "/dev/disk/disk953" was changed.

Volume Group configuration for /dev/vgLP0data has been saved.

Reconfiguration of physical volume "/dev/disk/disk954" for the
requested maximum volume group size 20971520 MB succeeded.
Previous number of extents: 63999
Number of extents after reconfiguration: 63999
Physical volume "/dev/disk/disk954" was changed.

Volume Group configuration for /dev/vgLP0data has been saved.

Add the disk back, and now it shows the correct Total PE and Free PE:

root@hpux:~ # vgextend -g pvgLP0data vgLP0data /dev/disk/disk670
Volume group "vgLP0data" has been successfully extended.
Physical volume group "pvgLP0data" has been successfully extended.
Volume Group configuration for /dev/vgLP0data has been saved in /etc/lvmconf/vgLP0data.conf

root@hpux:~ # vgdisplay -v vgLP0data
--- Volume groups ---
VG Name /dev/vgLP0data
VG Write Access read/write
VG Status available, exclusive
Max LV 2047
Cur LV 8
Open LV 8
Cur Snapshot LV 0
Max PV 2048
Cur PV 8
Act PV 8
Max PE per PV 524288
VGDA 16
PE Size (Mbytes) 32
Unshare unit size (Kbytes) 1024
Total PE 511992
Alloc PE 447986
Current pre-allocated PE 0
Free PE 64006
Total PVG 1
Total Spare PVs 0
Total Spare PVs in use 0
VG Version 2.2
VG Max Size 20t
VG Max Extents 655360
Cur Snapshot Capacity 0p
Max Snapshot Capacity 20t

--- Logical volumes ---
LV Name /dev/vgLP0data/lvsapdata1
LV Status available/syncd
LV Size (Mbytes) 2047936
Current LE 63998
Allocated PE 63998
Used PV 7

LV Name /dev/vgLP0data/lvsapdata2
LV Status available/syncd
LV Size (Mbytes) 2047936
Current LE 63998
Allocated PE 63998
Used PV 7

LV Name /dev/vgLP0data/lvsapdata3
LV Status available/syncd
LV Size (Mbytes) 2047936
Current LE 63998
Allocated PE 63998
Used PV 7

LV Name /dev/vgLP0data/lvsapdata4
LV Status available/syncd
LV Size (Mbytes) 2047936
Current LE 63998
Allocated PE 63998
Used PV 7

LV Name /dev/vgLP0data/lvsapdata5
LV Status available/syncd
LV Size (Mbytes) 2047936
Current LE 63998
Allocated PE 63998
Used PV 7

LV Name /dev/vgLP0data/lvsapdata6
LV Status available/syncd
LV Size (Mbytes) 2047936
Current LE 63998
Allocated PE 63998
Used PV 7

LV Name /dev/vgLP0data/lvsapdata7
LV Status available/syncd
LV Size (Mbytes) 2047936
Current LE 63998
Allocated PE 63998
Used PV 7

LV Name /dev/vgLP0data/lvsapdata8
LV Status available/syncd
LV Size (Mbytes) 0
Current LE 0
Allocated PE 0
Used PV 0

--- Physical volumes ---
PV Name /dev/disk/disk949
PV Status available
Total PE 63999
Free PE 0
Current pre-allocated PE 0
Autoswitch On
Proactive Polling On

PV Name /dev/disk/disk950
PV Status available
Total PE 63999
Free PE 0
Current pre-allocated PE 0
Autoswitch On
Proactive Polling On

PV Name /dev/disk/disk951
PV Status available
Total PE 63999
Free PE 0
Current pre-allocated PE 0
Autoswitch On
Proactive Polling On

PV Name /dev/disk/disk952
PV Status available
Total PE 63999
Free PE 0
Current pre-allocated PE 0
Autoswitch On
Proactive Polling On

PV Name /dev/disk/disk953
PV Status available
Total PE 63999
Free PE 2
Current pre-allocated PE 0
Autoswitch On
Proactive Polling On

PV Name /dev/disk/disk954
PV Status available
Total PE 63999
Free PE 2
Current pre-allocated PE 0
Autoswitch On
Proactive Polling On

PV Name /dev/disk/disk955
PV Status available
Total PE 63999
Free PE 3
Current pre-allocated PE 0
Autoswitch On
Proactive Polling On

PV Name /dev/disk/disk670
PV Status available
Total PE 63999
Free PE 63999
Current pre-allocated PE 0
Autoswitch On
Proactive Polling On

--- Physical volume groups ---
PVG Name pvgLP0data
PV Name /dev/disk/disk949
PV Name /dev/disk/disk950
PV Name /dev/disk/disk951
PV Name /dev/disk/disk952
PV Name /dev/disk/disk953
PV Name /dev/disk/disk954
PV Name /dev/disk/disk955
PV Name /dev/disk/disk670

HP-UX: Adding a new disk that is part of a Physical Volume Group (PVG)

root@hpux:~ # vgdisplay -v vgLP0data
--- Volume groups ---
VG Name /dev/vgLP0data
VG Write Access read/write
VG Status available, exclusive
Max LV 2047
Cur LV 7
Open LV 7
Cur Snapshot LV 0
Max PV 2048
Cur PV 7
Act PV 7
Max PE per PV 447994
VGDA 14
PE Size (Mbytes) 32
Unshare unit size (Kbytes) 1024
Total PE 447993
Alloc PE 447986
Current pre-allocated PE 0
Free PE 7
Total PVG 1
Total Spare PVs 0
Total Spare PVs in use 0
VG Version 2.2
VG Max Size 14335819m
VG Max Extents 447994
Cur Snapshot Capacity 0p
Max Snapshot Capacity 14335819m

--- Logical volumes ---
LV Name /dev/vgLP0data/lvsapdata1
LV Status available/syncd
LV Size (Mbytes) 2047936
Current LE 63998
Allocated PE 63998
Used PV 7

LV Name /dev/vgLP0data/lvsapdata2
LV Status available/syncd
LV Size (Mbytes) 2047936
Current LE 63998
Allocated PE 63998
Used PV 7

LV Name /dev/vgLP0data/lvsapdata3
LV Status available/syncd
LV Size (Mbytes) 2047936
Current LE 63998
Allocated PE 63998
Used PV 7

LV Name /dev/vgLP0data/lvsapdata4
LV Status available/syncd
LV Size (Mbytes) 2047936
Current LE 63998
Allocated PE 63998
Used PV 7

LV Name /dev/vgLP0data/lvsapdata5
LV Status available/syncd
LV Size (Mbytes) 2047936
Current LE 63998
Allocated PE 63998
Used PV 7

LV Name /dev/vgLP0data/lvsapdata6
LV Status available/syncd
LV Size (Mbytes) 2047936
Current LE 63998
Allocated PE 63998
Used PV 7

LV Name /dev/vgLP0data/lvsapdata7
LV Status available/syncd
LV Size (Mbytes) 2047936
Current LE 63998
Allocated PE 63998
Used PV 7

--- Physical volumes ---
PV Name /dev/disk/disk949
PV Status available
Total PE 63999
Free PE 0
Current pre-allocated PE 0
Autoswitch On
Proactive Polling On

PV Name /dev/disk/disk950
PV Status available
Total PE 63999
Free PE 0
Current pre-allocated PE 0
Autoswitch On
Proactive Polling On

PV Name /dev/disk/disk951
PV Status available
Total PE 63999
Free PE 0
Current pre-allocated PE 0
Autoswitch On
Proactive Polling On

PV Name /dev/disk/disk952
PV Status available
Total PE 63999
Free PE 0
Current pre-allocated PE 0
Autoswitch On
Proactive Polling On

PV Name /dev/disk/disk953
PV Status available
Total PE 63999
Free PE 2
Current pre-allocated PE 0
Autoswitch On
Proactive Polling On

PV Name /dev/disk/disk954
PV Status available
Total PE 63999
Free PE 2
Current pre-allocated PE 0
Autoswitch On
Proactive Polling On

PV Name /dev/disk/disk955
PV Status available
Total PE 63999
Free PE 3
Current pre-allocated PE 0
Autoswitch On
Proactive Polling On

--- Physical volume groups ---
PVG Name pvgLP0data
PV Name /dev/disk/disk949
PV Name /dev/disk/disk950
PV Name /dev/disk/disk951
PV Name /dev/disk/disk952
PV Name /dev/disk/disk953
PV Name /dev/disk/disk954
PV Name /dev/disk/disk955

Add the disk to LVM

root@hpux:~ # pvcreate /dev/rdisk/disk670
Physical volume "/dev/rdisk/disk670" has been successfully created.

Add the disk to the volume group, using the -g flag so it is also added to the physical volume group:

root@hpux:~ # vgextend -g pvgLP0data vgLP0data /dev/disk/disk670
Volume group "vgLP0data" has been successfully extended.
Physical volume group "pvgLP0data" has been successfully extended.
Volume Group configuration for /dev/vgLP0data has been saved in /etc/lvmconf/vgLP0data.conf
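
The PVG membership can be confirmed in /etc/lvmpvg, where LVM keeps the physical volume group definitions (as the lvcreate(1M) excerpt earlier mentions):

root@hpux:~ # cat /etc/lvmpvg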

lvlnboot: Boot volume should be the first logical volume on the physical volume

If you run lvlnboot to register the boot volume and it fails with an error saying the boot volume should be the first logical volume on the physical volume:

root@hpux:~ # /usr/sbin/lvlnboot -b /dev/vg00/lvol1
lvlnboot: Boot volume should be the first logical volume on the physical volume

Check with pvdisplay which logical volume occupies the first physical extents (the Physical extents section, PE 00000 onward). It must be the logical volume you are declaring as the boot volume.

In this case it should show /dev/vg00/lvol1, but the first extents belong to /dev/vg00/lvermhome:

root@hpux:~ # pvdisplay -v /dev/disk/disk31_p2 | more
--- Physical volumes ---
PV Name /dev/disk/disk31_p2
VG Name /dev/vg00
PV Status available
Allocatable yes
VGDA 2
Cur LV 11
PE Size (Mbytes) 32
Total PE 3171
Free PE 1323
Allocated PE 1848
Stale PE 0
IO Timeout (Seconds) default
Autoswitch On
Proactive Polling On

--- Distribution of physical volume ---
LV Name LE of LV PE for LV
/dev/vg00/lvol1 56 56
/dev/vg00/lvol2 256 256
/dev/vg00/lvol3 160 160
/dev/vg00/lvol4 64 64
/dev/vg00/lvol5 160 160
/dev/vg00/lvol6 64 64
/dev/vg00/lvol7 320 320
/dev/vg00/lvol8 160 160
/dev/vg00/lvol9 256 256
/dev/vg00/lvol10 320 320
/dev/vg00/lvermhome 32 32

--- Physical extents ---
PE Status LV LE
00000 current /dev/vg00/lvermhome 00000
00001 current /dev/vg00/lvermhome 00001
00002 current /dev/vg00/lvermhome 00002

I had mirrored a new disk to replace the old boot disk, and lvermhome was mirrored onto the first extents before lvol1, which is why the message appeared in this case.
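
One common way out (a sketch, not from the original post; it assumes the mirror copies on the new disk can be safely dropped and re-synced) is to re-create the mirrors starting with lvol1, so the boot volume receives the first extents on the disk:

root@hpux:~ # lvreduce -m 0 /dev/vg00/lvermhome /dev/disk/disk31_p2
root@hpux:~ # lvreduce -m 0 /dev/vg00/lvol1 /dev/disk/disk31_p2
root@hpux:~ # lvextend -m 1 /dev/vg00/lvol1 /dev/disk/disk31_p2
root@hpux:~ # lvextend -m 1 /dev/vg00/lvermhome /dev/disk/disk31_p2
root@hpux:~ # /usr/sbin/lvlnboot -b /dev/vg00/lvol1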

UX:vxfs fsadm: ERROR: V-3-20279: /dev/volumegroup/logicalvolume is not the root inode of a vxfs file system

I ran fsadm against the device file and it showed this error message:

root@hpux:~ # fsadm -F vxfs -b 14336M /dev/vgSAPappls/lvusrsapWP0
UX:vxfs fsadm: ERROR: V-3-20279: /dev/vgSAPappls/lvusrsapWP0 is not the root inode of a vxfs file system

fsadm for a mounted VxFS file system must be run against the mount point instead:

root@hpux:~ # fsadm -F vxfs -b 14336M /usr/sap/WP0
UX:vxfs fsadm: INFO: V-3-25942: /dev/vgSAPappls/rlvusrsapWP0 size increased from 10485760 sectors to 14680064 sectors
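
bdf at the mount point confirms the new size:

root@hpux:~ # bdf /usr/sap/WP0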