multipath: /sbin/scsi_id exitted with 1 – cannot get the the wwid for cciss!c0d0

Whenever you run multipath and it shows the message cannot get the the wwid for cciss!c0d0, like below:

root@linux:~ # multipath -ll oradisk004
/sbin/scsi_id exitted with 1
cannot get the the wwid for cciss!c0d0
oradisk004 (360060e800573b800000073b8000012d2) dm-12 HP,OPEN-V
[size=50G][features=1 queue_if_no_path][hwhandler=1 hp-sw][rw]
\_ round-robin 0 [prio=8][active]
\_ 1:0:0:3 sdd 8:48 [active][ready]
\_ 2:0:0:3 sdh 8:112 [active][ready]

Edit /etc/multipath.conf and verify that you are blacklisting the cciss devices:

blacklist {
    devnode "^cciss!c[0-9]d[0-9]*"
}
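
After editing the blacklist, reload the multipath configuration and check the device again. The exact reload command varies by distribution, but on RHEL-style systems something like the following usually works:

root@linux:~ # service multipathd reload
root@linux:~ # multipath -ll oradisk004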

Using sanlun: NetApp utility for gathering information about LUNs

To map NetApp LUNs to their Linux device names, use the sanlun command:

root@linux:~ # sanlun lun show
controller(7mode/E-Series)/ device host lun
vserver(cDOT/FlashRay) lun-pathname filename adapter protocol size product
-----------------------------------------------------------------------------------------------
dataontap01 /vol/linux_fcdisk5/linux_fcdisk5.lun /dev/sdag host5 FCP 66.0g 7DOT
dataontap01 /vol/linux_fcdisk5/linux_fcdisk5.lun /dev/sdaa host3 FCP 66.0g 7DOT
dataontap01 /vol/linux_fcdisk6/linux_fcdisk6.lun /dev/sdah host5 FCP 142.0g 7DOT
dataontap01 /vol/linux_fcdisk6/linux_fcdisk6.lun /dev/sdab host3 FCP 142.0g 7DOT
dataontap01 /vol/linux_fcdisk4/linux_fcdisk4.lun /dev/sdaf host3 FCP 100g 7DOT
dataontap01 /vol/linux_fcdisk3/linux_fcdisk3.lun /dev/sdae host3 FCP 350.0g 7DOT
dataontap01 /vol/linux_fcdisk2/linux_fcdisk2.lun /dev/sdad host3 FCP 250g 7DOT
dataontap01 /vol/linux_fcdisk1/linux_fcdisk1.lun /dev/sdac host3 FCP 160.0g 7DOT
dataontap01 /vol/linux_fcdisk4/linux_fcdisk4.lun /dev/sdz host5 FCP 100g 7DOT
dataontap01 /vol/linux_fcdisk2/linux_fcdisk2.lun /dev/sdx host5 FCP 250g 7DOT
dataontap01 /vol/linux_fcdisk3/linux_fcdisk3.lun /dev/sdy host5 FCP 350.0g 7DOT
dataontap01 /vol/linux_fcdisk1/linux_fcdisk1.lun /dev/sdw host5 FCP 160.0g 7DOT
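
sanlun can also report on the FC HBAs the host is using, which helps when a path is missing. Assuming the NetApp Linux Host Utilities are installed, the adapter listing looks like this:

root@linux:~ # sanlun fcp show adapter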

To show NetApp LUN multipath information, run the same command with the -p switch:

root@linux:~ # sanlun lun show -p

ONTAP Path: dataontap01:/vol/linux_fcdisk2/linux_fcdisk2.lun
LUN: 1
LUN Size: 250g
Controller CF State: Cluster Enabled
Controller Partner: dataontap02
Product: 7DOT
Host Device: VG_2d4b_softwarevg(360a980004257492f5724464c54412d4b)
Multipath Policy: round-robin 0
Multipath Provider: Native
--------- ---------- ------- ------------ ----------------------------------------------
host controller controller
path path /dev/ host target
state type node adapter port
--------- ---------- ------- ------------ ----------------------------------------------
up primary sdx host5 1b
up secondary sdad host3 1b

ONTAP Path: dataontap01:/vol/linux_fcdisk4/linux_fcdisk4.lun
LUN: 3
LUN Size: 100g
Controller CF State: Cluster Enabled
Controller Partner: dataontap02
Product: 7DOT
Host Device: VG_2d4f_softwarevg(360a980004257492f5724464c54412d4f)
Multipath Policy: round-robin 0
Multipath Provider: Native
--------- ---------- ------- ------------ ----------------------------------------------
host controller controller
path path /dev/ host target
state type node adapter port
--------- ---------- ------- ------------ ----------------------------------------------
up primary sdz host5 1b
up secondary sdaf host3 1b

ONTAP Path: dataontap01:/vol/linux_fcdisk1/linux_fcdisk1.lun
LUN: 0
LUN Size: 160.0g
Controller CF State: Cluster Enabled
Controller Partner: dataontap02
Product: 7DOT
Host Device: VG_2d49_softwarevg(360a980004257492f5724464c54412d49)
Multipath Policy: round-robin 0
Multipath Provider: Native
--------- ---------- ------- ------------ ----------------------------------------------
host controller controller
path path /dev/ host target
state type node adapter port
--------- ---------- ------- ------------ ----------------------------------------------
up primary sdw host5 1b
up secondary sdac host3 1b

ONTAP Path: dataontap01:/vol/linux_fcdisk6/linux_fcdisk6.lun
LUN: 5
LUN Size: 142.0g
Controller CF State: Cluster Enabled
Controller Partner: dataontap02
Product: 7DOT
Host Device: VG_2d53_softwarevg(360a980004257492f5724464c54412d53)
Multipath Policy: round-robin 0
Multipath Provider: Native
--------- ---------- ------- ------------ ----------------------------------------------
host controller controller
path path /dev/ host target
state type node adapter port
--------- ---------- ------- ------------ ----------------------------------------------
up primary sdah host5 1b
up secondary sdab host3 1b

ONTAP Path: dataontap01:/vol/linux_fcdisk3/linux_fcdisk3.lun
LUN: 2
LUN Size: 350.0g
Controller CF State: Cluster Enabled
Controller Partner: dataontap02
Product: 7DOT
Host Device: VG_2d4d_softwarevg(360a980004257492f5724464c54412d4d)
Multipath Policy: round-robin 0
Multipath Provider: Native
--------- ---------- ------- ------------ ----------------------------------------------
host controller controller
path path /dev/ host target
state type node adapter port
--------- ---------- ------- ------------ ----------------------------------------------
up primary sdy host5 1b
up secondary sdae host3 1b

ONTAP Path: dataontap01:/vol/linux_fcdisk5/linux_fcdisk5.lun
LUN: 4
LUN Size: 66.0g
Controller CF State: Cluster Enabled
Controller Partner: dataontap02
Product: 7DOT
Host Device: VG_2d51_softwarevg(360a980004257492f5724464c54412d51)
Multipath Policy: round-robin 0
Multipath Provider: Native
--------- ---------- ------- ------------ ----------------------------------------------
host controller controller
path path /dev/ host target
state type node adapter port
--------- ---------- ------- ------------ ----------------------------------------------
up primary sdag host5 1b
up secondary sdaa host3 1b

Trying to install Storage Foundation Basic on CentOS? It doesn’t work

Running the Storage Foundation Basic installer gave me an error saying perl could not be found:

root@centos6veritas:/tmp/veritas/dvd3-sfbasic/rhel6_x86_64# ./installer
Error: Cannot find perl to execute ./installer

The installer script looks for its bundled perl binary in a per-distribution directory under perl/. Since this is not a genuine Red Hat Enterprise Linux, a workaround is needed:

root@centos6veritas:/tmp/veritas/dvd3-sfbasic/rhel6_x86_64/perl # ln -s RHEL6x8664 SLES10x8664

root@centos6veritas:/tmp/veritas/dvd3-sfbasic/rhel6_x86_64/perl # ls -l
total 4
drwxrwxr-x. 4 root root 4096 Oct 28 07:16 RHEL6x8664
lrwxrwxrwx. 1 root root 10 Feb 21 00:41 SLES10x8664 -> RHEL6x8664

Thanks to the Symantec Storage Foundation 6.2 Release Notes – Linux entry "Storage Foundation Basic cannot be installed on the Oracle Enterprise Linux platform (3651391)", I found out why the installer was not working.

But in the end the installer refuses to install and quits, because it expects Red Hat Enterprise Linux or Oracle Linux:

Estimated time remaining: (mm:ss) 0:10 2 of 8

Checking system communication ………………………………………………………………………………………………………………………… Done
Checking release compatibility ……………………………………………………………………………………………………………………… Failed

System verification checks completed

The following errors were discovered on the systems:

CPI ERROR V-9-20-1208 This release is intended to operate on OL and RHEL Linux distributions but centos6veritas is a CentOS system

installer log files and summary file are saved at:

/opt/VRTS/install/logs/installer-201502210042QRK

Would you like to view the summary file? [y,n,q] (n)
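
The compatibility check most likely keys off the distribution release file, and on CentOS that file identifies the system as CentOS rather than Red Hat. You can confirm what the installer sees with:

root@centos6veritas:~ # cat /etc/redhat-release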

Discovering the disk model to replace in an HP ProLiant running Linux

You ran hpacucli to check the internal disks in an HP ProLiant server and one of them has failed:

root@linux:~ # hpacucli ctrl all show config
Smart Array P410i in Slot 0 (Embedded) (sn: 5001438013FD2D00)

array A (SAS, Unused Space: 0 MB)

logicaldrive 1 (279.4 GB, RAID 1, Interim Recovery Mode)

physicaldrive 1I:1:1 (port 1I:box 1:bay 1, SAS, 300 GB, Failed)
physicaldrive 1I:1:2 (port 1I:box 1:bay 2, SAS, 300 GB, OK)

SEP (Vendor ID PMCSIERA, Model SRC 8x6G) 250 (WWID: 5001438013FD2D0F)

To discover the hard disk model, run hpacucli ctrl all show config detail and look for the physical drive's Model field:

root@linux:~ # hpacucli ctrl all show config detail

Smart Array P410i in Slot 0 (Embedded)
Bus Interface: PCI
Slot: 0
Serial Number: 5001438013FD2D00
Cache Serial Number: PBCDH0CRH1TCEG
RAID 6 (ADG) Status: Disabled
Controller Status: OK
Hardware Revision: C
Firmware Version: 5.76
Rebuild Priority: Medium
Expand Priority: Medium
Surface Scan Delay: 3 secs
Surface Scan Mode: Idle
Queue Depth: Automatic
Monitor and Performance Delay: 60 min
Elevator Sort: Enabled
Degraded Performance Optimization: Disabled
Inconsistency Repair Policy: Disabled
Wait for Cache Room: Disabled
Surface Analysis Inconsistency Notification: Disabled
Post Prompt Timeout: 15 secs
Cache Board Present: True
Cache Status: OK
Cache Ratio: 25% Read / 75% Write
Drive Write Cache: Disabled
Total Cache Size: 512 MB
Total Cache Memory Available: 400 MB
No-Battery Write Cache: Disabled
Cache Backup Power Source: Capacitors
Battery/Capacitor Count: 1
Battery/Capacitor Status: OK
SATA NCQ Supported: True

Array: A
Interface Type: SAS
Unused Space: 0 MB
Status: Failed Physical Drive
Array Type: Data

One of the drives on this array have failed or has been removed.

Logical Drive: 1
Size: 279.4 GB
Fault Tolerance: 1
Heads: 255
Sectors Per Track: 32
Cylinders: 65535
Strip Size: 256 KB
Full Stripe Size: 256 KB
Status: Interim Recovery Mode
Caching: Enabled
Unique Identifier: 600508B1001CD2D750A3CD6D749784BC
Disk Name: /dev/cciss/c0d0
Mount Points: /boot 250 MB
OS Status: LOCKED
Logical Drive Label: AEB333F050014380108763703899
Mirror Group 0:
physicaldrive 1I:1:1 (port 1I:box 1:bay 1, SAS, 300 GB, Failed)
Mirror Group 1:
physicaldrive 1I:1:2 (port 1I:box 1:bay 2, SAS, 300 GB, OK)
Drive Type: Data

physicaldrive 1I:1:1
Port: 1I
Box: 1
Bay: 1
Status: Failed
Last Failure Reason: Write retries failed
Drive Type: Data Drive
Interface Type: SAS
Size: 300 GB
Rotational Speed: 10000
Firmware Revision: HPD4
Serial Number: ECA1PC20D4NR1205
Model: HP EG0300FBDSP
Current Temperature (C): 25
Maximum Temperature (C): 36
PHY Count: 2
PHY Transfer Rate: 6.0Gbps, Unknown

physicaldrive 1I:1:2
Port: 1I
Box: 1
Bay: 2
Status: OK
Drive Type: Data Drive
Interface Type: SAS
Size: 300 GB
Rotational Speed: 10000
Firmware Revision: HPD4
Serial Number: EB01PC20V2RK1205
Model: HP EG0300FBDSP
Current Temperature (C): 28
Maximum Temperature (C): 35
PHY Count: 2
PHY Transfer Rate: 6.0Gbps, Unknown

SEP (Vendor ID PMCSIERA, Model SRC 8x6G) 250
Device Number: 250
Firmware Version: RevC
WWID: 5001438013FD2D0F
Vendor ID: PMCSIERA
Model: SRC 8x6G
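
The detailed output is long; if you only care about the drive models, a quick filter like this (assuming grep is available) gets straight to them:

root@linux:~ # hpacucli ctrl all show config detail | grep -iE "physicaldrive|model"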

Then go to the HP/Compaq Hard Disk Drives – Hard Drive Model Number Matrix and search for the hard drive model shown in the detailed output.

If you didn't find the disk on that list, try this other one: HP/Compaq SCSI Hard Disk Drives – Hard Drive Model Number Matrix.

Disk showing Failed LUN paths in HP-UX

Displaying the lunpaths to the LUN:

root@hp-ux:~ # scsimgr -p lun_map -D /dev/rdisk/disk31
lunpath:61:0/2/1/0.0x21230002ac001673.0x4001000000000000:fibre_channel:FAILED:AUTH_FAILED
lunpath:62:0/2/1/0.0x21230002ac001673.0x4002000000000000:fibre_channel:ACTIVE:ACTIVE
lunpath:71:0/5/1/0.0x20240002ac001673.0x4001000000000000:fibre_channel:FAILED:AUTH_FAILED
lunpath:72:0/5/1/0.0x20240002ac001673.0x4002000000000000:fibre_channel:ACTIVE:ACTIVE

Displaying the LUN-to-lunpath mapping:

root@hp-ux:~ # ioscan -m lun /dev/rdisk/disk31
Class I Lun H/W Path Driver S/W State H/W Type Health Description
=======================================================================
disk 31 64000/0xfa00/0x8c esdisk CLAIMED DEVICE limited 3PARdataVV
0/2/1/0.0x21230002ac001673.0x4001000000000000
0/2/1/0.0x21230002ac001673.0x4002000000000000
0/5/1/0.0x20240002ac001673.0x4001000000000000
0/5/1/0.0x20240002ac001673.0x4002000000000000
/dev/disk/disk31 /dev/rdisk/disk31

root@hp-ux:~ # scsimgr get_info -D /dev/rdisk/disk31

STATUS INFORMATION FOR LUN : /dev/rdisk/disk31

Generic Status Information

SCSI services internal state = ONLINE
Device type = Direct_Access
EVPD page 0x83 description code = 1
EVPD page 0x83 description association = 0
EVPD page 0x83 description type = 3
World Wide Identifier (WWID) = 0x50002ac0031a1673
Serial number = " 1405747"
Vendor id = "3PARdata"
Product id = "VV "
Product revision = "3131"
Other properties = ""
SPC protocol revision = 6
Open count (includes chr/blk/pass-thru/class) = 1
Raw open count (includes class/pass-thru) = 0
Pass-thru opens = 0
LUN path count = 4
Active LUN paths = 2
Standby LUN paths = 0
Failed LUN paths = 2
Maximum I/O size allowed = 2097152
Preferred I/O size = 2097152
Outstanding I/Os = 0
I/O load balance policy = round_robin
Path fail threshold time period = 0
Transient time period = 120
Tracing buffer size = 1024
LUN Path used when policy is path_lockdown = NA
LUN access type = NA
Asymmetric logical unit access supported = No
Asymmetric states supported = NA
Preferred paths reported by device = No
Preferred LUN paths = 0

Driver esdisk Status Information :

Capacity in number of blocks = 213909504
Block size in bytes = 512
Number of active IOs = 0
Special properties =
Maximum number of IO retries = 45
IO transfer timeout in secs = 30
FORMAT command timeout in secs = 86400
START UNIT command timeout in secs = 60
Timeout in secs before starting failing IO = 120
IO infinite retries = false

I saw two failed LUN paths. To revalidate the LUN path bindings for the disk, I used scsimgr with the replace_wwid operation:

root@hp-ux:~ # scsimgr -f replace_wwid -D /dev/rdisk/disk31
scsimgr: Successfully validated binding of LUN paths with new LUN.

The invalid paths were removed:

root@hp-ux:~ # ioscan -m lun /dev/rdisk/disk31
Class I Lun H/W Path Driver S/W State H/W Type Health Description
======================================================================
disk 31 64000/0xfa00/0x8c esdisk CLAIMED DEVICE online 3PARdataVV
0/2/1/0.0x21230002ac001673.0x4002000000000000
0/5/1/0.0x20240002ac001673.0x4002000000000000
/dev/disk/disk31 /dev/rdisk/disk31
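
To double-check, run the same lunpath query from the beginning of this section again and confirm that only ACTIVE paths remain:

root@hp-ux:~ # scsimgr -p lun_map -D /dev/rdisk/disk31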

hpacucli Error: No controllers detected

I have an HP ProLiant server that uses hardware RAID for the boot disk, but when I ran hpacucli no status was shown:

root@linux:~ # hpacucli ctrl all show config

Error: No controllers detected.

I verified that the server has a RAID controller and checked the server model:

root@linux:~ # lspci | grep -i raid
02:00.0 RAID bus controller: Hewlett-Packard Company Smart Array Gen8 Controllers (rev 01)

root@linux:~ # dmidecode | grep -i proliant
Product Name: ProLiant DL380p Gen8
Family: ProLiant

I had hpacucli 8.70 and read that old versions can cause problems like this, so I uninstalled it:

root@linux:~ # rpm -qa | grep hpacucli
hpacucli-8.70-8.0.i386

root@linux:~ # rpm -e hpacucli

And installed the newest version

root@linux:~ # rpm -ivh /tmp/hpacucli-9.40-12.0.x86_64.rpm
Preparing… ########################################### [100%]
1:hpacucli ########################################### [100%]

DOWNGRADE NOTE: To downgrade this application to any version prior to 9.10.x.x, the current RPM must be manually uninstalled using the "rpm -e" command before any prior versions can be installed.

LOCKING NOTE: The locking mechanism starting with versions 9.10.X.X, are not compatible with prior versions of the applications. Therefore, mixing older and newer versions of the various applications (ACU, HPACUCLI, HPACUSCRIPTING) is not recommended.
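
As a quick sanity check, confirm that the new package is the one registered with RPM:

root@linux:~ # rpm -q hpacucli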

After this, the utility showed my drives:

root@linux:~ # hpacucli ctrl all show config

Smart Array P420i in Slot 0 (Embedded) (sn: 001438027AE8760)

array A (SAS, Unused Space: 0 MB)

logicaldrive 1 (279.4 GB, RAID 1, OK)

physicaldrive 1I:2:1 (port 1I:box 2:bay 1, SAS, 300 GB, OK)
physicaldrive 1I:2:2 (port 1I:box 2:bay 2, SAS, 300 GB, OK)

SEP (Vendor ID PMCSIERA, Model SRCv8x6G) 380 (WWID: 5001438027AE876F)

Installing Oracle Java 8 Update 31 plugin in Firefox – Linux

I have a Red Hat Enterprise Linux 5 server and want to install Java 8 with the browser plugin.

root@linux:~ # cat /etc/*release
Red Hat Enterprise Linux Server release 5.8 (Tikanga)

To do that, go to http://www.java.com/en/download/manual.jsp and download the version that matches your machine. I chose the Linux x64 RPM:

root@linux:/tmp # rpm -ivh jre-8u31-linux-x64.rpm
Preparing… ########################################### [100%]
1:jre1.8.0_31 ########################################### [100%]
Unpacking JAR files…
rt.jar…
jsse.jar…
charsets.jar…
localedata.jar…
jfxrt.jar…

Go to /usr/lib64/mozilla/plugins and create a symbolic link:

root@linux:/usr/lib64/mozilla/plugins # ln -s /usr/java/jre1.8.0_31/lib/amd64/libnpjp2.so libnpjp2.so
root@linux:/usr/lib64/mozilla/plugins # ls -l
total 0
lrwxrwxrwx 1 root root 41 Jun 9 2014 libflashplayer.so -> /usr/lib64/flash-plugin/libflashplayer.so
lrwxrwxrwx 1 root root 41 Jul 7 2014 libjavaplugin.so -> /etc/alternatives/libjavaplugin.so.x86_64
lrwxrwxrwx 1 root root 43 Jan 28 15:18 libnpjp2.so -> /usr/java/jre1.8.0_31/lib/amd64/libnpjp2.so

Verify that the plugin is loaded by typing about:plugins in the Firefox address bar.
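
You can also confirm from the command line that the JRE itself is the expected release (the path comes from the RPM install above):

root@linux:~ # /usr/java/jre1.8.0_31/bin/java -version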

SUSE Linux 11 stuck at boot: Set System Time to the current Hardware Clock

I had a problem with a SLES 11 system running under VMware that got stuck at boot with the message:
Set System Time to the current Hardware Clock
To solve this, I booted into single user mode and moved the script that sets the time out of the way:

root@linux:~ # mv /etc/init.d/boot.clock /root
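
Since boot.clock is what normally syncs the system time from the hardware clock at boot, you may want to run that step manually once the system is up. A one-off equivalent, assuming the standard hwclock utility is present, is:

root@linux:~ # /sbin/hwclock --hctosys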

How to remove the Default Profile button that showed up in Google Chrome

You are horrified that this button has appeared.
Default Profile
To disable it, type chrome://flags/#enable-new-avatar-menu in the address bar and select Disabled.
Default Profile Flags
Click the Relaunch Now button that shows up at the bottom of the screen.

pvmove: Insufficient free space with both disks the same size

I was migrating some LUNs to another storage array.

After creating the new LUNs with the same size, I intended to move the data to the new disk, but pvmove reported insufficient free space:

root@linux:~ # pvmove /dev/mapper/bkpcvrdd01 /dev/mapper/bkpcvrdd01newp1
Insufficient free space: 12800 extents needed, but only 12799 available
Unable to allocate mirror extents for pvmove0.
Failed to convert pvmove LV to mirrored

What happens is that the source physical volume is slightly bigger than the new one, leaving the new PV one physical extent short:

root@linux:~ # pvdisplay /dev/mapper/bkpcvrdd01 /dev/mapper/bkpcvrdd01newp1
— Physical volume —
PV Name /dev/mapper/bkpcvrdd01
VG Name bkpcvrdvg
PV Size 50.00 GB / not usable 640.00 KB
Allocatable yes (but full)
PE Size (KByte) 4096
Total PE 12800
Free PE 0
Allocated PE 12800
PV UUID giRfdd-oDKH-12W7-ObwE-nFoN-nF0L-3OFlK5

— Physical volume —
PV Name /dev/mapper/bkpcvrdd01newp1
VG Name bkpcvrdvg
PV Size 50.00 GB / not usable 3.31 MB
Allocatable yes
PE Size (KByte) 4096
Total PE 12799
Free PE 12799
Allocated PE 0
PV UUID MXxLNF-hDbU-eaEp-0wyW-ly1l-N1FJ-Bh2fNk

Both LUNs were created with the same nominal size, but due to the physical layout each disk ends up presenting a slightly different usable size.

Run fdisk -l and note the size in bytes. This is the new disk after a LUN expansion:

root@linux:~ # fdisk -l /dev/mapper/bkpcvrdd01new

Disk /dev/mapper/bkpcvrdd01new: 54.2 GB, 54223962112 bytes
255 heads, 63 sectors/track, 6592 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System
/dev/mapper/bkpcvrdd01newp1 1 6592 52950208+ 8e Linux LVM

Now compare both disk sizes:

53.6 GB, 53687746560 bytes
53.6 GB, 53686370304 bytes
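
To compare the exact sizes in bytes without doing the partition math by hand, blockdev (from util-linux) can be pointed at both devices; this is just a convenient way to see the small difference that costs the missing extent:

root@linux:~ # blockdev --getsize64 /dev/mapper/bkpcvrdd01
root@linux:~ # blockdev --getsize64 /dev/mapper/bkpcvrdd01newp1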
