Category: AIX

AIX – Removing disks from a volume group and then inserting a new disk

In this activity a disk array drawer is going to be removed, so a new disk will be added and some of the old disks will be removed from the volume group.

Listing the disks

root@aix:/ # lspv
hdisk0 0003416af966647d rootvg active
hdisk1 0003416af740bb45 rootvg active
hdisk2 0003416a783c13a2 datavg active
hdisk3 0003416af7cc1c47 datavg active
hdisk4 0003416af7cc1cef datavg active
hdisk5 0003416af7cc1d8e datavg active
hdisk6 0003416af7cc1e47 datavg active
hdisk7 0003416a702e8968 datavg active
hdisk8 0003416af7cc1f97 datavg active
hdisk9 0003416afae71331 datavg active
hdisk10 0003416afae713de None
hdisk11 0003416afae7147e datavg active
hdisk12 0003416afae71520 datavg active
hdisk13 0003416afae715c5 datavg active
hdisk14 0003416afae7166c datavg active
hdisk15 0003416abe0699e3 datavg active
hdisk18 0003416a96438ea0 poolvg active
hdisk19 0003416a964390f3 poolvg active
hdisk20 0003416a96439327 poolvg active
hdisk21 0003416a964395d5 poolvg active
hdisk16 0003416a2f20c609 pool02vg active
hdisk17 0003416a2f20e88c pool03vg active

Run cfgmgr to detect the newly inserted disk

root@aix:/ # cfgmgr

Listing the disks again. The new disk is hdisk22

root@aix:/ # lspv
hdisk0 0003416af966647d rootvg active
hdisk1 0003416af740bb45 rootvg active
hdisk2 0003416a783c13a2 datavg active
hdisk3 0003416af7cc1c47 datavg active
hdisk4 0003416af7cc1cef datavg active
hdisk5 0003416af7cc1d8e datavg active
hdisk6 0003416af7cc1e47 datavg active
hdisk7 0003416a702e8968 datavg active
hdisk8 0003416af7cc1f97 datavg active
hdisk9 0003416afae71331 datavg active
hdisk10 0003416afae713de None
hdisk11 0003416afae7147e datavg active
hdisk12 0003416afae71520 datavg active
hdisk13 0003416afae715c5 datavg active
hdisk14 0003416afae7166c datavg active
hdisk15 0003416abe0699e3 datavg active
hdisk18 0003416a96438ea0 poolvg active
hdisk19 0003416a964390f3 poolvg active
hdisk20 0003416a96439327 poolvg active
hdisk21 0003416a964395d5 poolvg active
hdisk16 0003416a2f20c609 pool02vg active
hdisk17 0003416a2f20e88c pool03vg active
hdisk22 none None

Checking whether hdisk22 comes from the disk array

root@aix:/ # lsdev -Cc disk
hdisk0 Available 1Z-08-00-8,0 16 Bit LVD SCSI Disk Drive
hdisk1 Available 1Z-08-00-9,0 16 Bit LVD SCSI Disk Drive
hdisk2 Available 1Z-08-00-10,0 16 Bit LVD SCSI Disk Drive
hdisk3 Available 1Z-08-00-11,0 16 Bit LVD SCSI Disk Drive
hdisk4 Available 25-08-00-8,0 16 Bit LVD SCSI Disk Drive
hdisk5 Available 25-08-00-9,0 16 Bit LVD SCSI Disk Drive
hdisk6 Available 25-08-00-10,0 16 Bit LVD SCSI Disk Drive
hdisk7 Available 25-08-00-11,0 16 Bit LVD SCSI Disk Drive
hdisk8 Available 25-08-00-12,0 16 Bit LVD SCSI Disk Drive
hdisk9 Available 25-08-00-13,0 16 Bit LVD SCSI Disk Drive
hdisk10 Available 25-09-00-8,0 16 Bit LVD SCSI Disk Drive
hdisk11 Available 25-09-00-9,0 16 Bit LVD SCSI Disk Drive
hdisk12 Available 25-09-00-10,0 16 Bit LVD SCSI Disk Drive
hdisk13 Available 25-09-00-11,0 16 Bit LVD SCSI Disk Drive
hdisk14 Available 25-09-00-12,0 16 Bit LVD SCSI Disk Drive
hdisk15 Available 25-09-00-13,0 16 Bit LVD SCSI Disk Drive
hdisk16 Available 1f-08-02 1722-600 (600) Disk Array Device
hdisk17 Available 2a-08-02 1722-600 (600) Disk Array Device
hdisk18 Available 2a-08-02 1722-600 (600) Disk Array Device
hdisk19 Available 2a-08-02 1722-600 (600) Disk Array Device
hdisk20 Available 2a-08-02 1722-600 (600) Disk Array Device
hdisk21 Available 2a-08-02 1722-600 (600) Disk Array Device
hdisk22 Available 1f-08-02 1722-600 (600) Disk Array Device
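The array LUNs can be picked out of that listing mechanically. A minimal sketch, where the variable stands in for real `lsdev -Cc disk` output (an assumption for illustration; on the live system you would pipe the command itself):

```shell
# Sketch: list only the disk-array LUNs and their location codes.
# The variable is a stand-in for `lsdev -Cc disk` output.
lsdev_out='hdisk0 Available 1Z-08-00-8,0 16 Bit LVD SCSI Disk Drive
hdisk22 Available 1f-08-02 1722-600 (600) Disk Array Device'
echo "$lsdev_out" | grep 'Disk Array' | awk '{ print $1, $3 }'
```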

Adding hdisk22 to the volume group poolvg

root@aix:/ # extendvg poolvg hdisk22
0516-1254 extendvg: Changing the PVID in the ODM.

Listing the disks to confirm that hdisk22 is now in poolvg

root@aix:/ # lspv | grep hdisk22
hdisk22 0003416a16888bf6 poolvg active

Listing the logical volumes from the volume group poolvg

root@aix:/ # lsvg -l poolvg
poolvg:
LV NAME TYPE LPs PPs PVs LV STATE MOUNT POINT
pool_lv001 raw 128 128 4 open/syncd N/A
pool_lv002 raw 128 128 4 open/syncd N/A
pool_lv003 raw 128 128 4 open/syncd N/A
pool_lv004 raw 128 128 4 open/syncd N/A
pool_lv005 raw 128 128 4 open/syncd N/A
pool_lv006 raw 128 128 4 open/syncd N/A
pool_lv007 raw 128 128 4 open/syncd N/A
pool_lv008 raw 128 128 4 open/syncd N/A
pool_lv009 raw 128 128 4 open/syncd N/A
pool_lv010 raw 128 128 4 open/syncd N/A
pool_lv011 raw 128 128 4 open/syncd N/A
pool_lv012 raw 128 128 4 open/syncd N/A

Checking the characteristics of hdisk22

root@aix:/ # lspv hdisk22
PHYSICAL VOLUME: hdisk22 VOLUME GROUP: poolvg
PV IDENTIFIER: 0003416a16888bf6 VG IDENTIFIER 0003416a00004c00000001129644220a
PV STATE: active
STALE PARTITIONS: 0 ALLOCATABLE: yes
PP SIZE: 256 megabyte(s) LOGICAL VOLUMES: 0
TOTAL PPs: 4462 (1142272 megabytes) VG DESCRIPTORS: 1
FREE PPs: 4462 (1142272 megabytes) HOT SPARE: no
USED PPs: 0 (0 megabytes) MAX REQUEST: 1 megabyte
FREE DISTRIBUTION: 893..892..892..892..893
USED DISTRIBUTION: 00..00..00..00..00
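The megabyte figures in that lspv output follow directly from the PP count and the PP size; a quick sanity check in the shell:

```shell
# 4462 free PPs at 256 MB each, as lspv reports above.
free_pps=4462
pp_size_mb=256
echo $((free_pps * pp_size_mb))
```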

Removing the logical volumes. Repeat for every logical volume in the volume group

root@aix:/ # rmlv -f pool_lv003
rmlv: Logical volume pool_lv003 is removed.

Removing the disks hdisk18, hdisk19, hdisk20 and hdisk21 from the volume group poolvg

root@aix:/ # reducevg poolvg hdisk18
root@aix:/ # reducevg poolvg hdisk19
root@aix:/ # reducevg poolvg hdisk20
root@aix:/ # reducevg poolvg hdisk21
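The four reducevg calls can be collapsed into a loop. A dry-run sketch: the echo prints what would run; drop it to execute for real on the AIX host as root.

```shell
# Dry-run sketch: print the reducevg command for each disk leaving poolvg.
for d in hdisk18 hdisk19 hdisk20 hdisk21; do
  echo "reducevg poolvg $d"
done
```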

Checking if the disks were removed

root@aix:/ # lspv
hdisk0 0003416af966647d rootvg active
hdisk1 0003416af740bb45 rootvg active
hdisk2 0003416a783c13a2 datavg active
hdisk3 0003416af7cc1c47 datavg active
hdisk4 0003416af7cc1cef datavg active
hdisk5 0003416af7cc1d8e datavg active
hdisk6 0003416af7cc1e47 datavg active
hdisk7 0003416a702e8968 datavg active
hdisk8 0003416af7cc1f97 datavg active
hdisk9 0003416afae71331 datavg active
hdisk10 0003416afae713de None
hdisk11 0003416afae7147e datavg active
hdisk12 0003416afae71520 datavg active
hdisk13 0003416afae715c5 datavg active
hdisk14 0003416afae7166c datavg active
hdisk15 0003416abe0699e3 datavg active
hdisk18 0003416a96438ea0 None
hdisk19 0003416a964390f3 None
hdisk20 0003416a96439327 None
hdisk21 0003416a964395d5 None
hdisk16 0003416a2f20c609 pool02vg active
hdisk17 0003416a2f20e88c pool03vg active
hdisk22 0003416a16888bf6 poolvg active

Removing the disk definitions from the system

root@aix:/ # rmdev -dl hdisk18
hdisk18 deleted
root@aix:/ # rmdev -dl hdisk19
hdisk19 deleted
root@aix:/ # rmdev -dl hdisk20
hdisk20 deleted
root@aix:/ # rmdev -dl hdisk21
hdisk21 deleted

Checking if the disks are ready to be physically removed

root@aix:/ # lspv
hdisk0 0003416af966647d rootvg active
hdisk1 0003416af740bb45 rootvg active
hdisk2 0003416a783c13a2 datavg active
hdisk3 0003416af7cc1c47 datavg active
hdisk4 0003416af7cc1cef datavg active
hdisk5 0003416af7cc1d8e datavg active
hdisk6 0003416af7cc1e47 datavg active
hdisk7 0003416a702e8968 datavg active
hdisk8 0003416af7cc1f97 datavg active
hdisk9 0003416afae71331 datavg active
hdisk10 0003416afae713de None
hdisk11 0003416afae7147e datavg active
hdisk12 0003416afae71520 datavg active
hdisk13 0003416afae715c5 datavg active
hdisk14 0003416afae7166c datavg active
hdisk15 0003416abe0699e3 datavg active
hdisk16 0003416a2f20c609 pool02vg active
hdisk17 0003416a2f20e88c pool03vg active
hdisk22 0003416a16888bf6 poolvg active

The disks no longer appear

root@aix:/ # lspv | egrep 'hdisk18|hdisk19|hdisk20|hdisk21'
root@aix:/ #

Creating logical volumes in the volume group to replace the ones deleted earlier

mklv -t raw -y pool_lv001 poolvg 128
mklv -t raw -y pool_lv002 poolvg 128
mklv -t raw -y pool_lv003 poolvg 128
mklv -t raw -y pool_lv004 poolvg 128
mklv -t raw -y pool_lv005 poolvg 128
mklv -t raw -y pool_lv006 poolvg 128
mklv -t raw -y pool_lv007 poolvg 128
mklv -t raw -y pool_lv008 poolvg 128
mklv -t raw -y pool_lv009 poolvg 128
mklv -t raw -y pool_lv010 poolvg 128
mklv -t raw -y pool_lv011 poolvg 128
mklv -t raw -y pool_lv012 poolvg 128
mklv -t raw -y pool_lv013 poolvg 128
mklv -t raw -y pool_lv014 poolvg 128
mklv -t raw -y pool_lv015 poolvg 128
mklv -t raw -y pool_lv016 poolvg 128
mklv -t raw -y pool_lv017 poolvg 128
mklv -t raw -y pool_lv018 poolvg 128
mklv -t raw -y pool_lv019 poolvg 128
mklv -t raw -y pool_lv020 poolvg 128
mklv -t raw -y pool_lv021 poolvg 128
mklv -t raw -y pool_lv022 poolvg 128
mklv -t raw -y pool_lv023 poolvg 128
mklv -t raw -y pool_lv024 poolvg 128
mklv -t raw -y pool_lv025 poolvg 128
mklv -t raw -y pool_lv026 poolvg 128
mklv -t raw -y pool_lv027 poolvg 128
mklv -t raw -y pool_lv028 poolvg 128
mklv -t raw -y pool_lv029 poolvg 128
mklv -t raw -y pool_lv030 poolvg 128
mklv -t raw -y pool_lv031 poolvg 128
mklv -t raw -y pool_lv032 poolvg 128
mklv -t raw -y pool_lv033 poolvg 128
mklv -t raw -y pool_lv034 poolvg 128
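The 34 mklv commands above can be generated with a loop instead of typed one by one. A sketch: printf only prints the commands here; on the real system you could pipe the output to sh once it looks right.

```shell
# Sketch: generate the 34 mklv commands; %03d zero-pads the LV number.
i=1
while [ "$i" -le 34 ]; do
  printf 'mklv -t raw -y pool_lv%03d poolvg 128\n' "$i"
  i=$((i + 1))
done
```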

Verifying the volume group poolvg

root@aix:/ # lsvg poolvg
VOLUME GROUP: poolvg VG IDENTIFIER: 0003416a00004c00000001129644220a
VG STATE: active PP SIZE: 256 megabyte(s)
VG PERMISSION: read/write TOTAL PPs: 4462 (1142272 megabytes)
MAX LVs: 256 FREE PPs: 110 (28160 megabytes)
LVs: 34 USED PPs: 4352 (1114112 megabytes)
OPEN LVs: 0 QUORUM: 2
TOTAL PVs: 1 VG DESCRIPTORS: 2
STALE PVs: 0 STALE PPs: 0
ACTIVE PVs: 1 AUTO ON: yes
MAX PPs per VG: 30480
MAX PPs per PV: 5080 MAX PVs: 6
LTG size (Dynamic): 1024 kilobyte(s) AUTO SYNC: no
HOT SPARE: no BB POLICY: relocatable
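The lsvg numbers add up: 34 logical volumes of 128 PPs each account for the used space, and the remainder is free. A quick check:

```shell
# Sketch: 34 LVs x 128 PPs used; the rest of the 4462 total PPs are free.
total_pps=4462
used_pps=$((34 * 128))
echo "used=$used_pps free=$((total_pps - used_pps))"
```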

How to measure processor clock speed in AIX

To find the clock frequency of an IBM POWER processor on AIX, run pmcycles

root@aix:/ # pmcycles
This machine runs at 1200 MHz

To see the frequency of each individual processor, run pmcycles -m

root@aix:/ # pmcycles -m
CPU 0 runs at 1200 MHz
CPU 1 runs at 1200 MHz
CPU 2 runs at 1200 MHz
CPU 3 runs at 1200 MHz
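To confirm that every CPU reports the same clock, the frequency column can be reduced to its unique values. A sketch over saved output (the variable stands in for `pmcycles -m`):

```shell
# Sketch: unique frequencies from saved `pmcycles -m` output; one line means
# all CPUs agree.
out='CPU 0 runs at 1200 MHz
CPU 1 runs at 1200 MHz
CPU 2 runs at 1200 MHz
CPU 3 runs at 1200 MHz'
echo "$out" | awk '{ print $5 }' | sort -u
```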

AIX wall message: WARNING!!! The system is now operating with a power problem. This message will be walled every 12 hours. Remove this crontab entry after the problem is resolved

An AIX system displayed this message

Broadcast message from root@aix (tty) at 12:00:01 … rc.powerfail:2::WARNING!!! The system is now operating with a power problem. This message will be walled every 12 hours. Remove this crontab entry after the problem is resolved.

This message is issued from cron at midnight and at noon every day

root@aix:/ # crontab -l
0 00,12 * * * wall%rc.powerfail:2::WARNING!!! The system is now operating with a power problem. This message will be walled every 12 hours. Remove this crontab entry after the problem is resolved.
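Once the power problem is fixed, the entry can be removed by filtering it out of root's crontab. A hedged sketch: the variable and the second entry are stand-ins for illustration; on the real system you would review the output of `crontab -l | grep -v rc.powerfail` and, if correct, pipe it back into `crontab -`.

```shell
# Sketch: filter the powerfail wall entry out of a crontab listing.
# The variable stands in for `crontab -l`; the skulker line is a made-up example.
cron_entries='0 00,12 * * * wall%rc.powerfail:2::WARNING!!! The system is now operating with a power problem.
0 3 * * * /usr/sbin/skulker'
echo "$cron_entries" | grep -v rc.powerfail
```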

I did this to check if there is still a problem

root@aix:/ # sh /etc/rc.powerfail > /dev/console 2>&1

Broadcast message from root@aix (pts/1) at 12:18:49 … rc.powerfail: init has received a SIGPWR signal. The system is currently running under normal power conditions. Execute rc.powerfail -h as the root user for more information.

According to the response above, my system is running under normal power conditions. Here is the help text for the script

root@aix:/ # sh /etc/rc.powerfail -h
rc.powerfail:
This command is used to handle power problems with the system.
There are several different states that the system can be in when
the signal SIGPWR is received by init. The action taken will be
determined by the value of the power status. The following table
shows the values of the power status and action taken.
Power
Status   Indication
------   ----------------------------------------------------------------
0 System is running normally, there is no action taken.
1 A non-critical cooling problem exists.
2 A non-critical power problem exists.
3 System facing a critical condition. Will start shutdown in 10 minutes.
4 System facing a severe condition. Will be halted in the next 20 seconds.
255 ERROR with the machstat command, system shutdown starts immediately.
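The table above can be expressed as a shell case statement; a sketch (my own mapping of the documented status values, not code from rc.powerfail itself):

```shell
# Sketch: map a power status value to the action described in the table.
power_status_action() {
  case "$1" in
    0)   echo "running normally, no action" ;;
    1)   echo "non-critical cooling problem" ;;
    2)   echo "non-critical power problem" ;;
    3)   echo "critical condition, shutdown in 10 minutes" ;;
    4)   echo "severe condition, halt in 20 seconds" ;;
    255) echo "machstat error, immediate shutdown" ;;
    *)   echo "unknown status" ;;
  esac
}
power_status_action 2
```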

Another example, showing a different status code

root@aix:/ # sh /etc/rc.powerfail > /dev/console 2>&1

Broadcast message from root@aix (pts/1) at 14:58:56 …

rc.powerfail: init has received a SIGPWR signal.
The system is now operating with a non-critical power problem.
Execute rc.powerfail -h as the root user for more information.

Creating and removing a logical volume in AIX

I had several logical volumes that were showing a strange error

root@aix:/ # rmlvcopy lvpooldom01001 1
0516-622 rmlvcopy: Warning, cannot write lv control block data.

I tried to migrate the logical volume off the disk

root@aix:/ # migratepv -l lvpooldom01001 hdisk3
0516-076 lmigratelv: Cannot remove last good copy of stale partition.
Resynchronize the partitions with syncvg and try again.
0516-812 migratepv: Warning, migratepv did not completely succeed;
all physical partitions have not been moved off the PV.

But it did not complete successfully

root@aix:/ # lsvg -l tsm01vg
tsm01vg:
LV NAME TYPE LPs PPs PVs LV STATE MOUNT POINT
lvtsm jfs2 32 32 1 open/syncd /tsm
loglv00 jfs2log 1 1 1 open/syncd N/A
lvtsmdborig01 jfs 512 512 1 open/syncd N/A
lvtsmdborig02 jfs 512 512 1 open/syncd N/A
lvtsmlogorig01 jfs 256 256 1 open/syncd N/A
lvtsmdbmirr01 jfs 512 512 1 open/syncd N/A
lvtsmdbmirr02 jfs 512 512 1 open/syncd N/A
lvtsmlogmirr01 jfs 256 256 1 open/syncd N/A
0516-1147 : Warning – logical volume lvpooldom01001 may be partially mirrored.
lvpooldom01001 raw 512 666 11 closed/stale N/A

So I ended up removing the logical volumes

root@aix:/ # rmlv lvpooldom01001
Warning, all data contained on logical volume lvpooldom01001 will be destroyed.
rmlv: Do you wish to continue? y(es) n(o)? y
rmlv: Logical volume lvpooldom01001 is removed.
root@aix:/ # rmlv -f lvpooldom01003
rmlv: Logical volume lvpooldom01003 is removed.
root@aix:/ # rmlv -f lvpooldom01004
rmlv: Logical volume lvpooldom01004 is removed.
root@aix:/ # rmlv -f lvpooldom01005
rmlv: Logical volume lvpooldom01005 is removed.
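The repeated rmlv -f calls can also be written as a loop. A dry-run sketch: the echo prints the commands; drop it to execute them for real.

```shell
# Dry-run sketch: remove the remaining bad logical volumes in one pass.
for lv in lvpooldom01003 lvpooldom01004 lvpooldom01005; do
  echo "rmlv -f $lv"
done
```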

And then recreating them

root@aix:/ # mklv -t raw -y lvpooldom01001 tsm01vg 512
lvpooldom01001

Here is the new status. No warning is displayed

root@aix:/ # lsvg -l tsm01vg | grep lvpooldom01001
lvpooldom01001 raw 512 512 1 closed/syncd N/A

AIX errpt error message: 0315-180 logread: UNEXPECTED EOF

If you see this error message

root@aix:/ # errpt -a
0315-180 logread: UNEXPECTED EOF
0315-171 Unable to process the error log file /var/adm/ras/errlog.
0315-132 The supplied error log is not valid: /var/adm/ras/errlog.

And the log file has zero length

root@aix:/ # ls -l /var/adm/ras/errlog
-rw-rw-r-- 1 root system 0 Mar 14 09:38 /var/adm/ras/errlog

root@aix:/ # errclear 0
0315-180 logread: UNEXPECTED EOF
0315-171 Unable to process the error log file /var/adm/ras/errlog.
0315-132 The supplied error log is not valid: /var/adm/ras/errlog.

Stop error logging, remove the file, and then restart the error daemon

root@aix:/ # /usr/lib/errstop
root@aix:/ # rm /var/adm/ras/errlog
root@aix:/ # /usr/lib/errdemon

root@aix:/ # ls -l /var/adm/ras/errlog
-rw-rw-r-- 1 root system 123404 Jul 11 14:33 /var/adm/ras/errlog
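After restarting errdemon, a scripted check that the log is non-empty uses the shell's `-s` test. A sketch with a temporary stand-in file so it runs anywhere; on AIX, point it at /var/adm/ras/errlog instead.

```shell
# Sketch: [ -s file ] is true for a file with size greater than zero.
errlog=$(mktemp)              # stand-in for /var/adm/ras/errlog
printf 'entry' > "$errlog"
if [ -s "$errlog" ]; then echo "errlog OK"; else echo "errlog still empty"; fi
rm -f "$errlog"
```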

Checking I/O access in a specific disk in AIX

Run iostat and check the % tm_act column. The command below displays the statistics every two seconds, seven times. Note that the first line reports cumulative statistics since boot, not an interval sample

root@aix5:/ # iostat -d hdisk2 2 7
System configuration: lcpu=6 disk=4
Disks:        % tm_act     Kbps      tps    Kb_read   Kb_wrtn
hdisk2          85.2     3453.1     303.8   37734456   8731944
hdisk2          51.0     610.0      84.5        796       424
hdisk2          34.5     288.0      45.5        436       140
hdisk2          39.5     452.0      63.5        692       212
hdisk2          61.5     380.0      66.0        480       280
hdisk2          48.5     542.0      66.5        596       488
hdisk2          47.0     590.0      80.0       1020       160
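The interval samples can be averaged with awk, skipping the first line since it is cumulative since boot. A sketch over the saved output (the variable stands in for the iostat data rows):

```shell
# Sketch: average % tm_act (field 2) over the interval samples, excluding
# the first (since-boot) line.
samples='hdisk2 85.2 3453.1 303.8 37734456 8731944
hdisk2 51.0 610.0 84.5 796 424
hdisk2 34.5 288.0 45.5 436 140
hdisk2 39.5 452.0 63.5 692 212
hdisk2 61.5 380.0 66.0 480 280
hdisk2 48.5 542.0 66.5 596 488
hdisk2 47.0 590.0 80.0 1020 160'
echo "$samples" | awk 'NR > 1 { sum += $2; n++ } END { printf "%.1f\n", sum / n }'
```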

Stopping and starting an AIX Subsystem

List all the subsystems in AIX’s System Resource Controller (SRC), then look for the subsystem that you want. In this example, I’ll restart sshd

root@aix:/ # lssrc -a | grep ssh
sshd ssh 340158 active

Issue the command to stop sshd

root@aix:/ # stopsrc -s sshd
0513-044 The sshd Subsystem was requested to stop.

Then start it

root@aix:/ # startsrc -s sshd
0513-059 The sshd Subsystem has been started. Subsystem PID is 340162.
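The stop/start pair can be wrapped in a small function. A dry-run sketch: the echoes only print the commands. Keep in mind that stopsrc merely requests the stop (as the 0513-044 message says), so on a real system it is prudent to confirm `lssrc -s sshd` reports inoperative before starting it again.

```shell
# Dry-run sketch: restart an SRC subsystem; echo prints the commands that
# would run on the AIX host.
restart_subsystem() {
  echo "stopsrc -s $1"
  echo "startsrc -s $1"
}
restart_subsystem sshd
```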

Increasing a JFS2 filesystem on AIX

First you need to find the logical volume behind the filesystem that you’ll resize

root@aix:/ # df -m /u04
Filesystem MB blocks Free %Used Iused %Iused Mounted on
/dev/lvcrnstore 34816.00 1892.34 95% 103 1% /u04

With this information, run lslv lvcrnstore to find out which volume group this logical volume belongs to. Then check whether there are enough FREE PPs to extend the filesystem

root@aix:/ # lsvg oraclevg
VOLUME GROUP: oraclevg VG IDENTIFIER: 000d400c00004c00000000fd81379ca3
VG STATE: active PP SIZE: 256 megabyte(s)
VG PERMISSION: read/write TOTAL PPs: 531 (135936 megabytes)
MAX LVs: 256 FREE PPs: 17 (4352 megabytes)
LVs: 15 USED PPs: 514 (131584 megabytes)
OPEN LVs: 14 QUORUM: 2
TOTAL PVs: 1 VG DESCRIPTORS: 2
STALE PVs: 0 STALE PPs: 0
ACTIVE PVs: 1 AUTO ON: yes
MAX PPs per PV: 1016 MAX PVs: 32
LTG size: 128 kilobyte(s) AUTO SYNC: no
HOT SPARE: no BB POLICY: relocatable

Resize the filesystem and check the new size

root@aix:/ # chfs -a size=+4G /u04
Filesystem size changed to 79691776

root@aix:/ # df -m /u04
Filesystem MB blocks Free %Used Iused %Iused Mounted on
/dev/lvcrnstore 38912.00 5987.71 85% 103 1% /u04
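chfs reports the new size in 512-byte blocks; dividing by 2048 converts it to megabytes and matches the df -m figure. A quick check:

```shell
# Sketch: 512-byte blocks to MB (2048 blocks per MB).
blocks=79691776
echo $((blocks / 2048))
```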

Notice that the number of free PPs decreased, since they were used to increase the filesystem

root@aix:/ # lsvg oraclevg
VOLUME GROUP: oraclevg VG IDENTIFIER: 000d400c00004c00000000fd81379ca3
VG STATE: active PP SIZE: 256 megabyte(s)
VG PERMISSION: read/write TOTAL PPs: 531 (135936 megabytes)
MAX LVs: 256 FREE PPs: 1 (256 megabytes)
LVs: 15 USED PPs: 530 (135680 megabytes)
OPEN LVs: 14 QUORUM: 2
TOTAL PVs: 1 VG DESCRIPTORS: 2
STALE PVs: 0 STALE PPs: 0
ACTIVE PVs: 1 AUTO ON: yes
MAX PPs per PV: 1016 MAX PVs: 32
LTG size: 128 kilobyte(s) AUTO SYNC: no
HOT SPARE: no BB POLICY: relocatable
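The PP accounting is consistent: a +4G extension at a 256 MB PP size consumes 16 PPs, which matches FREE PPs dropping from 17 to 1.

```shell
# Sketch: how many 256 MB physical partitions a 4 GB extension consumes.
pp_size_mb=256
grow_mb=$((4 * 1024))
echo $((grow_mb / pp_size_mb))
```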

End of life information about HP-UX, Solaris, AIX and Linux

If you need to know whether a release of a Unix operating system is still supported by its vendor, check these links for information

End of life information about HP-UX (PDF)

End of life information about Solaris

End of life information about AIX

End of life information about Red Hat Enterprise Linux

End of life information about Suse Linux Enterprise

Problem with HMC – rebooting

hscroot@localhost:~> vtmenu

Retrieving name of managed system(s) . . . 10D400C

———————————————————-
Partitions On Managed System: 10D400C
———————————————————-
1) LPAR1 Not Available:
2) LPAR2 Not Available:

Enter Number of Running Partition (q to quit): q

Bye.

The server hosting the two LPARs was shut down for electrical maintenance. When I tried to start the partitions, I got this error:

hscroot@localhost:~> chsysstate -r lpar -m 10D400C -o on -n LPAR1
Unable to lock the Service Processor. Perform one of the following steps: (1) Check serial cable connection; (2) Check if another Console is communicating with the Service Processor; (3) Perform the Release Lock task; (4) Perform Rebuild task to re-establish the connection.

I tried again and I got a different error.

hscroot@localhost:~> chsysstate -r lpar -m 10D400C -o on -n LPAR1
Command sent to Service Processor failed. Error Response 4.

To reboot the IBM HMC, type the command below

hscroot@localhost:~> hmcshutdown -t now -r

Broadcast message from root (Sun Jun 6 08:35:38 2010):

The system is going down for reboot NOW!

The reboot itself ran into problems, so I asked for the HMC to be powered off and back on. After that, I had no more problems.