UXMON: NTP Problems. Running on local time. Peer: 127.127.1.0 Cur. Offset: 0.000 Cur.

This alert showed up on a Linux system:

Node : linux.setaoffice.com
Node Type : Intel/AMD x64(HTTPS)
Severity : minor
OM Server Time: 2015-03-24 19:54:33
Message : UXMON: NTP Problems. Running on local time. Peer: 127.127.1.0 Cur. Offset: 0.000 Cur. Symbol: * Ref. ID: .LOCL.
Msg Group : OS
Application : ntpmon
Object : ntpq
Event Type : not_found
Instance Name : not_found

Instruction : This message shows no valid peers

Please, contact with your UX expert

Please check /var/opt/OV/log/OpC/ntp_mon.log for more details

Checking the log file, I noticed the following:

root@linux:~ # tail -50 /var/opt/OV/log/OpC/ntp_mon.log
NTP Problems. Running on local time. Peer: 127.127.1.0 Cur. Offset: 0.000 Cur. Symbol: * Ref. ID: .LOCL.

It is using the local clock fudge driver (127.127.1.0) even though it is configured with four NTP servers:

root@linux:~ # ntpq -p
remote refid st t when poll reach delay offset jitter
==============================================================================
*LOCAL(0) .LOCL. 10 l 38 64 377 0.000 0.000 0.001
NTP1server 172.16.73.51 4 u 5 16 1 31.138 -15.569 0.001
NTP2server 172.16.73.9 3 u 4 16 1 26.626 -15.001 0.001
NTP3server 172.16.73.50 5 u 3 16 1 23.416 -16.803 0.001
NTP4server 172.16.73.51 4 u 2 16 1 28.340 -10.889 0.001

I stopped the NTP service, but ntpq showed a daemon was still answering:

root@linux:~ # service ntp stop
Shutting down network time protocol daemon (NTPD) done

root@linux:~ # ntpq -p
remote refid st t when poll reach delay offset jitter
==============================================================================
*LOCAL(0) .LOCL. 10 l 65 64 377 0.000 0.000 0.001
NTP1server 172.16.73.51 4 u - 16 377 30.697 -18.969 2.157
NTP2server 172.16.73.9 3 u 13 16 377 26.172 -1.256 1.121
NTP3server 172.16.73.50 5 u 11 16 377 23.315 -12.326 8.230
NTP4server 172.16.73.51 4 u 16 16 377 28.309 -11.716 2.544

I found that another ntpd process was already running:

root@linux:~ # ps -ef | grep ntp
root 23339 1 0 2014 ? 00:38:38 ntpd -pq
root 30880 23657 0 10:07 pts/8 00:00:00 grep ntp

Killed the process

root@linux:~ # kill 23339
root@linux:~ # ps -ef | grep ntp
root 30918 23657 0 10:07 pts/8 00:00:00 grep ntp

With the rogue process gone, nothing was listening anymore:

root@linux:~ # ntpq -p
ntpq: read: Connection refused

Started the NTP service:

root@linux:~ # service ntp start
Starting network time protocol daemon (NTPD) done

root@linux:~ # ntpq -p
remote refid st t when poll reach delay offset jitter
==============================================================================
NTP1server 172.16.73.51 4 u 5 16 1 31.138 -15.569 0.001
NTP2server 172.16.73.9 3 u 4 16 1 26.626 -15.001 0.001
NTP3server 172.16.73.50 5 u 3 16 1 23.416 -16.803 0.001
NTP4server 172.16.73.51 4 u 2 16 1 28.340 -10.889 0.001
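
For reference, here is a minimal sketch of the kind of check ntpmon performs. This is illustrative only, not the actual ntpmon logic: it warns when the peer ntpd has selected (the line ntpq flags with *) is the local clock driver, or when no peer is selected at all.

#!/bin/sh
# Illustrative check, not the real ntpmon: warn if ntpd's selected
# system peer (the line ntpq marks with '*') is the 127.127.x.x
# local clock driver, or if no peer has been selected.
SYSPEER=$(ntpq -pn 2>/dev/null | awk '/^\*/ {sub(/^\*/, ""); print $1}')
case "$SYSPEER" in
    127.127.*) echo "WARNING: ntpd is running on the local clock ($SYSPEER)" ;;
    "")        echo "WARNING: ntpd has no selected system peer" ;;
    *)         echo "OK: system peer is $SYSPEER" ;;
esac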

HPOM – UXMON: File /var/log/messages age exceeds 1d threshold.

I removed the file /var/opt/OV/conf/OpC/act_mon.cfg, which contained the monitoring configuration for /var/log/messages:

################################################################################
#
# The intention of this script is to monitor the last modification time of a
# file, or to monitor its size. This is used to supervise other programs or
# scripts which have to write regularly to their logfile. If a program or a
# script doesn't modify "its" file, there is probably something wrong with this process.
#
# If the configured interval is exceeded for the file which is intended to be
# monitored, or if the size is above or below the configured limit (depending
# on whether the size threshold has a < or > modifier),
# a log-message is written
#
#
################################################################################

[LINUX]
/var/log/cron 2d WARNING 0000-2400 * TT_LINUX
/var/log/messages 1d warning 0000-2400 * TT_LINUX
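
If you want to keep the other checks, an alternative is to comment out just the /var/log/messages entry instead of removing the whole file. This assumes the agent treats lines starting with # as comments, as the header block suggests:

root@linux:~ # sed -i 's|^/var/log/messages|#&|' /var/opt/OV/conf/OpC/act_mon.cfg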

ia64dsk: The disk for dev_t d00009b appears to have grown since the partition table was written.

We are seeing a message saying a disk has grown since the partition table was written. This showed up after a storage migration using an HP MPX200 router.

root@hp-ux:~ # dmesg

Mar 19 14:33
ia64dsk: The disk for dev_t d00009b appears to have grown since the partition table was written.
ia64dsk: The disk for dev_t d00009b appears to have grown since the partition table was written.

To identify the disk, check /dev

root@hp-ux:~ # ls -ltraR /dev | grep 009b
crw-r----- 1 bin sys 13 0x00009b Feb 12 14:29 disk30
brw-r----- 1 bin sys 1 0x00009b Feb 12 14:29 disk30

According to the idisk output, we need to use the -G option, even though it is not documented in the manual page:

root@hp-ux:~ # idisk -p /dev/rdisk/disk30
idisk version: 1.44

EFI Primary Header:
Signature = EFI PART
Revision = 0x10000
HeaderSize = 0x5c
HeaderCRC32 = 0x65182f71
MyLbaLo = 0x1
MyLbaHi = 0x0
AlternateLbaLo = 0x117fffff
AlternateLbaHi = 0x0
FirstUsableLbaLo = 0x40
FirstUsableLbaHi = 0x0
LastUsableLbaLo = 0x117fffbf
LastUsableLbaHi = 0x0
Disk GUID = 9d7ea890-0835-11e4-8000-d6217b60e588
PartitionEntryLbaLo = 0x2
PartitionEntryLbaHi = 0x0
NumberOfPartitionEntries = 0xc
SizeOfPartitionEntry = 0x80
PartitionEntryArrayCRC32 = 0x6c45b3e0

Primary Partition Table (in 512 byte blocks):
Partition 1 (EFI):
Partition Type GUID = c12a7328-f81f-11d2-ba4b-00a0c93ec93b
Unique Partition GUID = 9d7eaaf2-0835-11e4-8000-d6217b60e588
Starting Lba Lo = 0x40
Starting Lba Hi = 0x0
Ending Lba Lo = 0xf9fff
Ending Lba Hi = 0x0
Partition 2 (HP-UX):
Partition Type GUID = 75894c1e-3aeb-11d3-b7c1-7b03a0000000
Unique Partition GUID = 9d7eab10-0835-11e4-8000-d6217b60e588
Starting Lba Lo = 0xfa000
Starting Lba Hi = 0x0
Ending Lba Lo = 0x117377ff
Ending Lba Hi = 0x0
Partition 3 (HPSP):
Partition Type GUID = e2a1e728-32e3-11d6-a682-7b03a0000000
Unique Partition GUID = 9d7eab24-0835-11e4-8000-d6217b60e588
Starting Lba Lo = 0x11737800
Starting Lba Hi = 0x0
Ending Lba Lo = 0x117ff7ff
Ending Lba Hi = 0x0
NOTE: This disk appears to have grown since the partition
table was written. Use idisk with the -G option to
extend the usable space to fill the disk.

The message is triggered because, after the migration, the LUN is larger than it was when the partition table was written, so the table no longer describes the whole disk.
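
The NOTE in the idisk output names the fix. A sketch of the invocation follows, assuming -G alone is enough on this release; since the option is undocumented here, verify against idisk(1M) on your system and save a copy of the partition table before writing anything:

root@hp-ux:~ # idisk -G /dev/rdisk/disk30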

Linux LVM: pvs showing dm-XX. How to map and show the friendly name (alias)

When LVM lists disks only by their dm-XX names, you can make it display the multipath friendly names (aliases) instead.

root@linux:~ # pvs
PV                VG         Fmt  Attr PSize   PFree
/dev/cciss/c0d0p2 rootvg     lvm2 a-   135.69G  11.69G
/dev/dm-0         softwarevg lvm2 a-   100.00G  30.00G
/dev/dm-1         softwarevg lvm2 a-   100.00G       0
/dev/dm-2         softwarevg lvm2 a-   100.00G       0
/dev/dm-33        softwarevg lvm2 a-   100.99G 100.99G
/dev/dm-6         bkpcvrdvg  lvm2 a-    50.00G       0
/dev/dm-7         softwarevg lvm2 a-   100.00G       0
/dev/dm-8         softwarevg lvm2 a-    50.00G       0

Edit file /etc/lvm/lvm.conf and tell LVM to not use the cache file. Set write_cache_state to 0

root@linux:~ # vi /etc/lvm/lvm.conf

write_cache_state = 0

And delete the cache file

root@linux:~ # rm /etc/lvm/.cache
or
root@linux:~ # rm /etc/lvm/cache/.cache

pvs should now display the friendly names:

root@linux:~ # pvs
PV VG Fmt Attr PSize PFree
/dev/cciss/c0d0p2 rootvg lvm2 a- 135.69G 11.69G
/dev/mapper/bkpdisk01-part1 bkpcvrdvg lvm2 a- 50.00G 0
/dev/mapper/sfwdisk01-part1 softwarevg lvm2 a- 100.00G 0
/dev/mapper/sfwdisk02-part1 softwarevg lvm2 a- 50.00G 0
/dev/mapper/sfwdisk03 softwarevg lvm2 a- 100.00G 0
/dev/mapper/sfwdisk04 softwarevg lvm2 a- 100.00G 30.00G
/dev/mapper/sfwdisk05 softwarevg lvm2 a- 100.00G 0
/dev/mapper/sfwdisk05_NEW1 softwarevg lvm2 a- 100.99G 100.99G

If you need to map a dm-XX device manually, go to /dev/mapper and do a long listing, then match the minor number after 253, with the XX in dm-XX. E.g. dm-33 is sfwdisk05_NEW1:

root@linux:/dev/mapper # ls -l
total 0
brw------- 1 root root 253, 18 Dec 16 10:56 bkpcvrdvg-apliclv
brw------- 1 root root 253, 19 Dec 16 10:56 bkpcvrdvg-bkpcvrdlv
brw------- 1 root root 253, 3 Dec 16 10:56 bkpdisk01
brw------- 1 root root 253, 6 Dec 16 10:56 bkpdisk01-part1
lrwxrwxrwx 1 root root 16 Dec 16 10:56 control -> ../device-mapper
brw------- 1 root root 253, 28 Mar 13 05:45 mpathe
brw------- 1 root root 253, 29 Mar 13 05:45 mpathf
brw------- 1 root root 253, 30 Mar 13 05:45 mpathg
brw------- 1 root root 253, 31 Mar 13 05:45 mpathh
brw------- 1 root root 253, 32 Mar 13 05:45 mpathi
brw------- 1 root root 253, 9 Dec 16 10:57 rootvg-auditlv
brw------- 1 root root 253, 10 Dec 16 10:57 rootvg-locallv
brw------- 1 root root 253, 11 Dec 16 10:57 rootvg-optlv
brw------- 1 root root 253, 12 Dec 16 10:56 rootvg-rootlv
brw------- 1 root root 253, 13 Dec 16 10:56 rootvg-swaplv
brw------- 1 root root 253, 14 Dec 16 10:57 rootvg-tmplv
brw------- 1 root root 253, 15 Dec 16 10:58 rootvg-userslv
brw------- 1 root root 253, 16 Dec 16 10:58 rootvg-usrlv
brw------- 1 root root 253, 17 Dec 16 10:58 rootvg-varlv
brw------- 1 root root 253, 4 Dec 16 10:56 sfwdisk01
brw------- 1 root root 253, 7 Dec 16 10:56 sfwdisk01-part1
brw------- 1 root root 253, 5 Dec 16 10:56 sfwdisk02
brw------- 1 root root 253, 8 Dec 16 10:56 sfwdisk02-part1
brw------- 1 root root 253, 1 Dec 16 10:56 sfwdisk03
brw------- 1 root root 253, 0 Dec 16 10:56 sfwdisk04
brw------- 1 root root 253, 2 Dec 16 10:56 sfwdisk05
brw------- 1 root root 253, 27 Mar 13 09:24 sfwdisk05_NEW
brw------- 1 root root 253, 33 Mar 13 09:26 sfwdisk05_NEW1
brw------- 1 root root 253, 25 Dec 16 12:25 softwarevg-applv
brw------- 1 root root 253, 26 Dec 16 10:56 softwarevg-arqlv
brw------- 1 root root 253, 21 Dec 16 10:56 softwarevg-deploylv
brw------- 1 root root 253, 22 Dec 16 10:56 softwarevg-logslv
brw------- 1 root root 253, 24 Dec 16 10:56 softwarevg-oraclelv
brw------- 1 root root 253, 20 Dec 16 10:56 softwarevg-softwarelv
brw------- 1 root root 253, 23 Dec 16 10:56 softwarevg-transferlv
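
To script the mapping instead of eyeballing the listing, reasonably recent kernels expose the device-mapper name under /sys/block. A sketch (the dm/name attribute may not exist on very old kernels):

root@linux:~ # for d in /sys/block/dm-*; do
>   printf '%s -> %s\n' "${d##*/}" "$(cat $d/dm/name)"
> done
dm-0 -> sfwdisk04
...
dm-33 -> sfwdisk05_NEW1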

Error when updating Google Chrome: Public key for google-chrome-release-version.x86_64.rpm is not installed

When you update your Fedora desktop, you may receive a message complaining that the public key for Google Chrome is not installed on your system:

Public key for google-chrome-stable-41.0.2272.76-1.x86_64.rpm is not installed

Update the repository file and add the gpgkey parameter. If you are installing from scratch, you can also create this file first and then run yum install google-chrome-stable.

root@fedora:~ # vi /etc/yum.repos.d/google-chrome.repo

[google-chrome]
name=google-chrome
baseurl=http://dl.google.com/linux/chrome/rpm/stable/x86_64
enabled=1
gpgcheck=1
gpgkey=https://dl-ssl.google.com/linux/linux_signing_key.pub

Then run yum update or yum install google-chrome-stable again and it will automatically import the public key:

warning: /var/cache/yum/x86_64/21/google-chrome/packages/google-chrome-stable-41.0.2272.76-1.x86_64.rpm: Header V4 DSA/SHA1 Signature, key ID 7fac5991: NOKEY
Retrieving key from https://dl-ssl.google.com/linux/linux_signing_key.pub
Importing GPG key 0x7FAC5991:
Userid : "Google, Inc. Linux Package Signing Key <linux-packages-keymaster@google.com>"
Fingerprint: 4cca 1eaf 950c ee4a b839 76dc a040 830f 7fac 5991
From : https://dl-ssl.google.com/linux/linux_signing_key.pub
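
Alternatively, you can import the signing key by hand before updating; rpm --import accepts a URL:

root@fedora:~ # rpm --import https://dl-ssl.google.com/linux/linux_signing_key.pub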

multipath: /sbin/scsi_id exitted with 1 – cannot get the the wwid for cciss!c0d0

Whenever you run multipath and it shows the message cannot get the the wwid for cciss!c0d0:

root@linux:~ # multipath -ll oradisk004
/sbin/scsi_id exitted with 1
cannot get the the wwid for cciss!c0d0
oradisk004 (360060e800573b800000073b8000012d2) dm-12 HP,OPEN-V
[size=50G][features=1 queue_if_no_path][hwhandler=1 hp-sw][rw]
\_ round-robin 0 [prio=8][active]
\_ 1:0:0:3 sdd 8:48 [active][ready]
\_ 2:0:0:3 sdh 8:112 [active][ready]

Edit /etc/multipath.conf and verify that you are blacklisting the cciss device:

blacklist {
    devnode "^cciss!c[0-9]d[0-9]*"
}
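
After saving the file, flush and rebuild the maps (multipath -F only removes maps that are not in use, so active devices are left alone) and re-run the query to confirm the message is gone:

root@linux:~ # multipath -F     # flush unused multipath maps
root@linux:~ # multipath -v2    # rebuild the maps
root@linux:~ # multipath -ll oradisk004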

Using sanlun: NetApp utility for gathering information about LUNs

To map NetApp LUNs, use the command sanlun

root@linux:~ # sanlun lun show
controller(7mode/E-Series)/ device host lun
vserver(cDOT/FlashRay) lun-pathname filename adapter protocol size product
--------------------------------------------------------------------------------------------------------------
dataontap01 /vol/linux_fcdisk5/linux_fcdisk5.lun /dev/sdag host5 FCP 66.0g 7DOT
dataontap01 /vol/linux_fcdisk5/linux_fcdisk5.lun /dev/sdaa host3 FCP 66.0g 7DOT
dataontap01 /vol/linux_fcdisk6/linux_fcdisk6.lun /dev/sdah host5 FCP 142.0g 7DOT
dataontap01 /vol/linux_fcdisk6/linux_fcdisk6.lun /dev/sdab host3 FCP 142.0g 7DOT
dataontap01 /vol/linux_fcdisk4/linux_fcdisk4.lun /dev/sdaf host3 FCP 100g 7DOT
dataontap01 /vol/linux_fcdisk3/linux_fcdisk3.lun /dev/sdae host3 FCP 350.0g 7DOT
dataontap01 /vol/linux_fcdisk2/linux_fcdisk2.lun /dev/sdad host3 FCP 250g 7DOT
dataontap01 /vol/linux_fcdisk1/linux_fcdisk1.lun /dev/sdac host3 FCP 160.0g 7DOT
dataontap01 /vol/linux_fcdisk4/linux_fcdisk4.lun /dev/sdz host5 FCP 100g 7DOT
dataontap01 /vol/linux_fcdisk2/linux_fcdisk2.lun /dev/sdx host5 FCP 250g 7DOT
dataontap01 /vol/linux_fcdisk3/linux_fcdisk3.lun /dev/sdy host5 FCP 350.0g 7DOT
dataontap01 /vol/linux_fcdisk1/linux_fcdisk1.lun /dev/sdw host5 FCP 160.0g 7DOT

To show NetApp LUN multipath information, pass the -p switch:

root@linux:~ # sanlun lun show -p

ONTAP Path: dataontap01:/vol/linux_fcdisk2/linux_fcdisk2.lun
LUN: 1
LUN Size: 250g
Controller CF State: Cluster Enabled
Controller Partner: dataontap02
Product: 7DOT
Host Device: VG_2d4b_softwarevg(360a980004257492f5724464c54412d4b)
Multipath Policy: round-robin 0
Multipath Provider: Native
--------- ---------- ------- ------------ ----------------------------------------------
host      controller                      controller
path      path       /dev/   host         target
state     type       node    adapter      port
--------- ---------- ------- ------------ ----------------------------------------------
up primary sdx host5 1b
up secondary sdad host3 1b

ONTAP Path: dataontap01:/vol/linux_fcdisk4/linux_fcdisk4.lun
LUN: 3
LUN Size: 100g
Controller CF State: Cluster Enabled
Controller Partner: dataontap02
Product: 7DOT
Host Device: VG_2d4f_softwarevg(360a980004257492f5724464c54412d4f)
Multipath Policy: round-robin 0
Multipath Provider: Native
--------- ---------- ------- ------------ ----------------------------------------------
host      controller                      controller
path      path       /dev/   host         target
state     type       node    adapter      port
--------- ---------- ------- ------------ ----------------------------------------------
up primary sdz host5 1b
up secondary sdaf host3 1b

ONTAP Path: dataontap01:/vol/linux_fcdisk1/linux_fcdisk1.lun
LUN: 0
LUN Size: 160.0g
Controller CF State: Cluster Enabled
Controller Partner: dataontap02
Product: 7DOT
Host Device: VG_2d49_softwarevg(360a980004257492f5724464c54412d49)
Multipath Policy: round-robin 0
Multipath Provider: Native
--------- ---------- ------- ------------ ----------------------------------------------
host      controller                      controller
path      path       /dev/   host         target
state     type       node    adapter      port
--------- ---------- ------- ------------ ----------------------------------------------
up primary sdw host5 1b
up secondary sdac host3 1b

ONTAP Path: dataontap01:/vol/linux_fcdisk6/linux_fcdisk6.lun
LUN: 5
LUN Size: 142.0g
Controller CF State: Cluster Enabled
Controller Partner: dataontap02
Product: 7DOT
Host Device: VG_2d53_softwarevg(360a980004257492f5724464c54412d53)
Multipath Policy: round-robin 0
Multipath Provider: Native
--------- ---------- ------- ------------ ----------------------------------------------
host      controller                      controller
path      path       /dev/   host         target
state     type       node    adapter      port
--------- ---------- ------- ------------ ----------------------------------------------
up primary sdah host5 1b
up secondary sdab host3 1b

ONTAP Path: dataontap01:/vol/linux_fcdisk3/linux_fcdisk3.lun
LUN: 2
LUN Size: 350.0g
Controller CF State: Cluster Enabled
Controller Partner: dataontap02
Product: 7DOT
Host Device: VG_2d4d_softwarevg(360a980004257492f5724464c54412d4d)
Multipath Policy: round-robin 0
Multipath Provider: Native
--------- ---------- ------- ------------ ----------------------------------------------
host      controller                      controller
path      path       /dev/   host         target
state     type       node    adapter      port
--------- ---------- ------- ------------ ----------------------------------------------
up primary sdy host5 1b
up secondary sdae host3 1b

ONTAP Path: dataontap01:/vol/linux_fcdisk5/linux_fcdisk5.lun
LUN: 4
LUN Size: 66.0g
Controller CF State: Cluster Enabled
Controller Partner: dataontap02
Product: 7DOT
Host Device: VG_2d51_softwarevg(360a980004257492f5724464c54412d51)
Multipath Policy: round-robin 0
Multipath Provider: Native
--------- ---------- ------- ------------ ----------------------------------------------
host      controller                      controller
path      path       /dev/   host         target
state     type       node    adapter      port
--------- ---------- ------- ------------ ----------------------------------------------
up primary sdag host5 1b
up secondary sdaa host3 1b
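
The Host Device field above is the multipath alias plus its WWID, so you can cross-check any LUN against the native multipath view. For example, the 250g LUN maps to:

root@linux:~ # multipath -ll VG_2d4b_softwarevg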

Trying to install Storage Foundation Basic on CentOS? It doesn’t work

Running the Storage Foundation Basic installer gave me an error message saying perl could not be found:

root@centos6veritas:/tmp/veritas/dvd3-sfbasic/rhel6_x86_64# ./installer
Error: Cannot find perl to execute ./installer

The installer script looks for the perl binary in a per-distribution directory under perl/. Since CentOS is not a genuine Red Hat Enterprise Linux, a workaround is needed:

root@centos6veritas:/tmp/veritas/dvd3-sfbasic/rhel6_x86_64/perl # ln -s RHEL6x8664 SLES10x8664

root@centos6veritas:/tmp/veritas/dvd3-sfbasic/rhel6_x86_64/perl # ls -l
total 4
drwxrwxr-x. 4 root root 4096 Oct 28 07:16 RHEL6x8664
lrwxrwxrwx. 1 root root 10 Feb 21 00:41 SLES10x8664 -> RHEL6x8664

Thanks to the Symantec Storage Foundation 6.2 Release Notes – Linux (“Storage Foundation Basic cannot be installed on the Oracle Enterprise Linux platform”, 3651391) I found out why the installer wasn’t working.

But in the end the installer refuses to install and quits, because it expects Red Hat Enterprise Linux or Oracle Linux:

Estimated time remaining: (mm:ss) 0:10 2 of 8

Checking system communication ………………………………………………………………………………………………………………………… Done
Checking release compatibility ……………………………………………………………………………………………………………………… Failed

System verification checks completed

The following errors were discovered on the systems:

CPI ERROR V-9-20-1208 This release is intended to operate on OL and RHEL Linux distributions but centos6veritas is a CentOS system

installer log files and summary file are saved at:

/opt/VRTS/install/logs/installer-201502210042QRK

Would you like to view the summary file? [y,n,q] (n)

Discover disk model to replace in an HP Proliant running Linux

You ran hpacucli to check the internal disks in an HP ProLiant server and one of them has failed:

root@linux:~ # hpacucli ctrl all show config
Smart Array P410i in Slot 0 (Embedded) (sn: 5001438013FD2D00)

array A (SAS, Unused Space: 0 MB)

logicaldrive 1 (279.4 GB, RAID 1, Interim Recovery Mode)

physicaldrive 1I:1:1 (port 1I:box 1:bay 1, SAS, 300 GB, Failed)
physicaldrive 1I:1:2 (port 1I:box 1:bay 2, SAS, 300 GB, OK)

SEP (Vendor ID PMCSIERA, Model SRC 8x6G) 250 (WWID: 5001438013FD2D0F)

To discover the hard disk model, run hpacucli ctrl all show config detail and look for the physical drive model

root@linux:~ # hpacucli ctrl all show config detail

Smart Array P410i in Slot 0 (Embedded)
Bus Interface: PCI
Slot: 0
Serial Number: 5001438013FD2D00
Cache Serial Number: PBCDH0CRH1TCEG
RAID 6 (ADG) Status: Disabled
Controller Status: OK
Hardware Revision: C
Firmware Version: 5.76
Rebuild Priority: Medium
Expand Priority: Medium
Surface Scan Delay: 3 secs
Surface Scan Mode: Idle
Queue Depth: Automatic
Monitor and Performance Delay: 60 min
Elevator Sort: Enabled
Degraded Performance Optimization: Disabled
Inconsistency Repair Policy: Disabled
Wait for Cache Room: Disabled
Surface Analysis Inconsistency Notification: Disabled
Post Prompt Timeout: 15 secs
Cache Board Present: True
Cache Status: OK
Cache Ratio: 25% Read / 75% Write
Drive Write Cache: Disabled
Total Cache Size: 512 MB
Total Cache Memory Available: 400 MB
No-Battery Write Cache: Disabled
Cache Backup Power Source: Capacitors
Battery/Capacitor Count: 1
Battery/Capacitor Status: OK
SATA NCQ Supported: True

Array: A
Interface Type: SAS
Unused Space: 0 MB
Status: Failed Physical Drive
Array Type: Data

One of the drives on this array have failed or has been removed.

Logical Drive: 1
Size: 279.4 GB
Fault Tolerance: 1
Heads: 255
Sectors Per Track: 32
Cylinders: 65535
Strip Size: 256 KB
Full Stripe Size: 256 KB
Status: Interim Recovery Mode
Caching: Enabled
Unique Identifier: 600508B1001CD2D750A3CD6D749784BC
Disk Name: /dev/cciss/c0d0
Mount Points: /boot 250 MB
OS Status: LOCKED
Logical Drive Label: AEB333F050014380108763703899
Mirror Group 0:
physicaldrive 1I:1:1 (port 1I:box 1:bay 1, SAS, 300 GB, Failed)
Mirror Group 1:
physicaldrive 1I:1:2 (port 1I:box 1:bay 2, SAS, 300 GB, OK)
Drive Type: Data

physicaldrive 1I:1:1
Port: 1I
Box: 1
Bay: 1
Status: Failed
Last Failure Reason: Write retries failed
Drive Type: Data Drive
Interface Type: SAS
Size: 300 GB
Rotational Speed: 10000
Firmware Revision: HPD4
Serial Number: ECA1PC20D4NR1205
Model: HP EG0300FBDSP
Current Temperature (C): 25
Maximum Temperature (C): 36
PHY Count: 2
PHY Transfer Rate: 6.0Gbps, Unknown

physicaldrive 1I:1:2
Port: 1I
Box: 1
Bay: 2
Status: OK
Drive Type: Data Drive
Interface Type: SAS
Size: 300 GB
Rotational Speed: 10000
Firmware Revision: HPD4
Serial Number: EB01PC20V2RK1205
Model: HP EG0300FBDSP
Current Temperature (C): 28
Maximum Temperature (C): 35
PHY Count: 2
PHY Transfer Rate: 6.0Gbps, Unknown

SEP (Vendor ID PMCSIERA, Model SRC 8x6G) 250
Device Number: 250
Firmware Version: RevC
WWID: 5001438013FD2D0F
Vendor ID: PMCSIERA
Model: SRC 8x6G

Then go to HP/Compaq Hard Disk Drives – Hard Drive Model Number Matrix and search for the hard drive model shown in the detailed output (HP EG0300FBDSP in this case).

If you don’t find the disk on that list, try this other one – HP/Compaq SCSI Hard Disk Drives – Hard Drive Model Number Matrix.
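
On a server with many drives, a quick way to pull out only the failed drives and their models from the detailed output. This is an illustrative sketch; it relies on the field order shown above (Status comes before Model within each physicaldrive block):

root@linux:~ # hpacucli ctrl all show config detail | \
>   awk '/physicaldrive/ {pd=$2}
>        /Status: Failed/ {bad=1}
>        /Model:/ {if (bad) {print pd, $0; bad=0}}'
1I:1:1 Model: HP EG0300FBDSP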

Disk showing Failed LUN paths in HP-UX

Displaying the LUN paths to the LUN:

root@hp-ux:~ # scsimgr -p lun_map -D /dev/rdisk/disk31
lunpath:61:0/2/1/0.0x21230002ac001673.0x4001000000000000:fibre_channel:FAILED:AUTH_FAILED
lunpath:62:0/2/1/0.0x21230002ac001673.0x4002000000000000:fibre_channel:ACTIVE:ACTIVE
lunpath:71:0/5/1/0.0x20240002ac001673.0x4001000000000000:fibre_channel:FAILED:AUTH_FAILED
lunpath:72:0/5/1/0.0x20240002ac001673.0x4002000000000000:fibre_channel:ACTIVE:ACTIVE

Displaying the LUN-to-lunpath mapping:

root@hp-ux:~ # ioscan -m lun /dev/rdisk/disk31
Class I Lun H/W Path Driver S/W State H/W Type Health Description
=======================================================================
disk 31 64000/0xfa00/0x8c esdisk CLAIMED DEVICE limited 3PARdataVV
0/2/1/0.0x21230002ac001673.0x4001000000000000
0/2/1/0.0x21230002ac001673.0x4002000000000000
0/5/1/0.0x20240002ac001673.0x4001000000000000
0/5/1/0.0x20240002ac001673.0x4002000000000000
/dev/disk/disk31 /dev/rdisk/disk31

root@hp-ux:~ # scsimgr get_info -D /dev/rdisk/disk31

STATUS INFORMATION FOR LUN : /dev/rdisk/disk31

Generic Status Information

SCSI services internal state = ONLINE
Device type = Direct_Access
EVPD page 0x83 description code = 1
EVPD page 0x83 description association = 0
EVPD page 0x83 description type = 3
World Wide Identifier (WWID) = 0x50002ac0031a1673
Serial number = " 1405747"
Vendor id = "3PARdata"
Product id = "VV "
Product revision = "3131"
Other properties = ""
SPC protocol revision = 6
Open count (includes chr/blk/pass-thru/class) = 1
Raw open count (includes class/pass-thru) = 0
Pass-thru opens = 0
LUN path count = 4
Active LUN paths = 2
Standby LUN paths = 0
Failed LUN paths = 2
Maximum I/O size allowed = 2097152
Preferred I/O size = 2097152
Outstanding I/Os = 0
I/O load balance policy = round_robin
Path fail threshold time period = 0
Transient time period = 120
Tracing buffer size = 1024
LUN Path used when policy is path_lockdown = NA
LUN access type = NA
Asymmetric logical unit access supported = No
Asymmetric states supported = NA
Preferred paths reported by device = No
Preferred LUN paths = 0

Driver esdisk Status Information :

Capacity in number of blocks = 213909504
Block size in bytes = 512
Number of active IOs = 0
Special properties =
Maximum number of IO retries = 45
IO transfer timeout in secs = 30
FORMAT command timeout in secs = 86400
START UNIT command timeout in secs = 60
Timeout in secs before starting failing IO = 120
IO infinite retries = false

I saw two failed LUN paths. To revalidate the binding of the LUN paths to the disk, I used scsimgr:

root@hp-ux:~ # scsimgr -f replace_wwid -D /dev/rdisk/disk31
scsimgr: Successfully validated binding of LUN paths with new LUN.

The invalid paths were removed:

root@hp-ux:~ # ioscan -m lun /dev/rdisk/disk31
Class I Lun H/W Path Driver S/W State H/W Type Health Description
======================================================================
disk 31 64000/0xfa00/0x8c esdisk CLAIMED DEVICE online 3PARdataVV
0/2/1/0.0x21230002ac001673.0x4002000000000000
0/5/1/0.0x20240002ac001673.0x4002000000000000
/dev/disk/disk31 /dev/rdisk/disk31
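
To spot other disks in the same state, here is a small sketch that loops over every disk and reports any with failed paths, parsing the same field layout as the scsimgr output above:

root@hp-ux:~ # for d in /dev/rdisk/disk*; do
>   scsimgr get_info -D $d 2>/dev/null | \
>     awk -v dev=$d '/Failed LUN paths/ && $NF > 0 {print dev ": " $NF " failed path(s)"}'
> done
/dev/rdisk/disk31: 2 failed path(s)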
