
Creating a jfs2 filesystem in AIX

Listing physical volumes

root@aix:/ # lspv
hdisk0 00c94ad454a2d4c5 rootvg active
hdisk1 00c94ad45808a18f rootvg active
hdisk3 00c94ad4229190d8 tsmpoolvg active
hdisk21 00ce196f4b9604c3 aplicvg active
hdisk22 00ce196f418e3f6d aplicvg active
hdisk44 00c94ad4c75dfb09 aplicvg active
hdisk2 none None
hdisk57 00c94ad481c4f2aa aplicvg active
hdisk55 00c94ad4f99f7480 tsm55dbvg active
hdisk56 00c94ad4f99f2d43 tsm55logvg active

Running cfgmgr to configure newly attached devices

root@aix:/ # cfgmgr

Listing physical volumes again

root@aix:/ # lspv
hdisk0 00c94ad454a2d4c5 rootvg active
hdisk1 00c94ad45808a18f rootvg active
hdisk3 00c94ad4229190d8 tsmpoolvg active
hdisk21 00ce196f4b9604c3 aplicvg active
hdisk22 00ce196f418e3f6d aplicvg active
hdisk44 00c94ad4c75dfb09 aplicvg active
hdisk2 none None
hdisk57 00c94ad481c4f2aa aplicvg active
hdisk55 00c94ad4f99f7480 tsm55dbvg active
hdisk56 00c94ad4f99f2d43 tsm55logvg active
hdisk4 none None

Comparing the lspv output, the new disk is hdisk4. Checking its unique ID to confirm it matches the expected LUN

root@aix:/ # lsattr -El hdisk4 | grep -i 600507680191818C1000000000000C98
unique_id 33213600507680191818C1000000000000C9804214503IBMfcp Device Unique Identification False

Using a script to query the unique_id of every disk

for i in `lspv | awk '{print $1}'`
do
echo $i `lsattr -El $i | grep unique_id`
done

Creating a volume group with a PP SIZE of 16 MB is not possible

root@aix:/ # mkvg -y tsmdbtmp -s 16 hdisk4
0516-1254 mkvg: Changing the PVID in the ODM.
0516-1208 mkvg: Warning, The Physical Partition Size of 16 requires the
creation of 32000 partitions for hdisk4. The system limitation is 16256
physical partitions per disk at a factor value of 16. Specify a larger
Physical Partition Size or a larger factor value in order create a
volume group on this disk.
0516-862 mkvg: Unable to create volume group.
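The arithmetic behind the failure is simple: the number of physical partitions is the disk size divided by the PP size, and AIX caps a disk at 16256 PPs at this factor value. A quick sketch (the ~500 GB disk size is an assumption inferred from the "32000 partitions" message):

```shell
# PPs needed = disk size / PP size; the limit here is 16256 PPs per disk
disk_mb=512000   # assumed ~500 GB disk, inferred from the mkvg warning above
for pp in 16 32; do
  echo "PP=${pp}MB needs $((disk_mb / pp)) partitions (limit 16256)"
done
```

At 16 MB the disk needs 32000 partitions, well over the limit; doubling the PP size to 32 MB halves that to 16000, which fits.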

Creating the volume group with a PP SIZE of 32 MB.

root@aix:/ # mkvg -y tsmdbtmp -s 32 hdisk4
tsmdbtmp

Listing volume group information

root@aix:/ # lsvg tsmdbtmp
VOLUME GROUP: tsmdbtmp VG IDENTIFIER: 00c94ad400004c000000015d801914eb
VG STATE: active PP SIZE: 32 megabyte(s)
VG PERMISSION: read/write TOTAL PPs: 15999 (511968 megabytes)
MAX LVs: 256 FREE PPs: 15999 (511968 megabytes)
LVs: 0 USED PPs: 0 (0 megabytes)
OPEN LVs: 0 QUORUM: 2 (Enabled)
TOTAL PVs: 1 VG DESCRIPTORS: 2
STALE PVs: 0 STALE PPs: 0
ACTIVE PVs: 1 AUTO ON: yes
MAX PPs per VG: 32512
MAX PPs per PV: 16256 MAX PVs: 2
LTG size (Dynamic): 256 kilobyte(s) AUTO SYNC: no
HOT SPARE: no BB POLICY: relocatable
PV RESTRICTION: none INFINITE RETRY: no
DISK BLOCK SIZE: 512 CRITICAL VG: no

Creating the filesystem. The logical volume name will be assigned automatically as fslvXX

root@aix:/ # smitty crfs

Add a File System

Move cursor to desired item and press Enter.

Add an Enhanced Journaled File System
Add a Journaled File System
Add a CDROM File System

F1=Help F2=Refresh F3=Cancel F8=Image
F9=Shell F10=Exit Enter=Do

Add an Enhanced Journaled File System

Move cursor to desired item and press Enter.

Add an Enhanced Journaled File System
Add an Enhanced Journaled File System on a Previously Defined Logical Volume

F1=Help F2=Refresh F3=Cancel F8=Image
F9=Shell F10=Exit Enter=Do

Add an Enhanced Journaled File System

Move cursor to desired item and press Enter.

Add an Enhanced Journaled File System
Add an Enhanced Journaled File System on a Previously Defined Logical Volume

+--------------------------------------------------------------------------+
| Volume Group Name |
| |
| Move cursor to desired item and press Enter. |
| |
| rootvg |
| aplicvg |
| tsmpoolvg |
| tsm55logvg |
| tsm55dbvg |
| tsmdbtmp |
| |
| F1=Help F2=Refresh F3=Cancel |
| F8=Image F10=Exit Enter=Do |
F1| /=Find n=Find Next |
F9+--------------------------------------------------------------------------+

Add an Enhanced Journaled File System

Type or select values in entry fields.
Press Enter AFTER making all desired changes.

[Entry Fields]
Volume group name tsmdbtmp
SIZE of file system
Unit Size Megabytes +
* Number of units [511900] #
* MOUNT POINT [/tsmdbtmp]
Mount AUTOMATICALLY at system restart? yes +
PERMISSIONS read/write +
Mount OPTIONS [] +
Block Size (bytes) 4096 +
Logical Volume for Log +
Inline Log size (MBytes) [] #
Extended Attribute Format +
ENABLE Quota Management? no +
Enable EFS? no +
Allow internal snapshots? no +
Mount GROUP []
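For scripting, smitty ultimately builds and runs a crfs command (F6 in smitty shows the command it will execute). An assumed equivalent one-liner for the fields filled in above; verify it with F6 on your system before relying on it:

```shell
# assumed crfs equivalent of the smitty screen above:
# jfs2 filesystem in vg tsmdbtmp, ~500 GB, mounted at /tsmdbtmp at boot
cmd="crfs -v jfs2 -g tsmdbtmp -a size=511900M -m /tsmdbtmp -A yes -p rw -a agblksize=4096"
echo "$cmd"
```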

Mounting the filesystem

root@aix:/ # mount /tsmdbtmp

Checking the filesystem size

root@aix:/ # df -m /tsmdbtmp
Filesystem MB blocks Free %Used Iused %Iused Mounted on
/dev/fslv08 511904.00 511825.51 1% 4 1% /tsmdbtmp


Linux EXT4-fs: error (device dm-156): ext4_lookup: deleted inode referenced: 1091357

Node : serviceguardnode2.setaoffice.com
Node Type : Intel/AMD x64(HTTPS)
Severity : minor
OM Server Time: 2016-12-22 18:22:32
Message : EXT4-fs: error (device dm-156): ext4_lookup: deleted inode referenced: 1091357
Msg Group : OS
Application : dmsg_mon
Object : EXT4
Event Type :
not_found

Instance Name :
not_found

Instruction : No

Checking which device is complaining: dm-156 is /dev/vgWPJ/lv_orawp0

root@serviceguardnode2:/dev/mapper # ls -l | grep 156
lrwxrwxrwx. 1 root root 9 Dec 14 22:15 vgWPJ-lv_orawp0 -> ../dm-156
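The same lookup can be wrapped in a small function that scans the device-mapper symlinks. This is only a sketch (dm_to_lv is a name I made up; the mapper directory is a parameter so it can be tested against any tree, and on a live system you would pass /dev/mapper):

```shell
# resolve a device-mapper minor number (e.g. 156) to its mapper name
# by scanning the symlinks; $1 = mapper directory, $2 = dm minor
dm_to_lv() {
  for link in "$1"/*; do
    # each mapper entry is a symlink like vgWPJ-lv_orawp0 -> ../dm-156
    [ "$(readlink "$link")" = "../dm-$2" ] && basename "$link"
  done
}

# dm_to_lv /dev/mapper 156  ->  vgWPJ-lv_orawp0 on this system
```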

The filesystem is currently mounted

root@serviceguardnode2:/dev/mapper # mount | grep lv_orawp0
/dev/mapper/vgWPJ-lv_orawp0 on /oracle/WPJ type ext4 (rw,errors=remount-ro,data_err=abort,barrier=0)

And the logical volume is open

root@serviceguardnode2:~ # lvs vgWPJ
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
lv_ora11264 vgWPJ -wi-ao---- 30.00g
lv_orawp0 vgWPJ -wi-ao---- 5.00g

This is a clustered environment, and the package is currently running on the other node

root@serviceguardnode2:/dev/mapper # cmviewcl | grep -i wpj
dbWPJ up running enabled serviceguardnode1

There is a Red Hat note referencing the error: “ext4_lookup: deleted inode referenced” errors in /var/log/messages in RHEL 6.

In clustered environments, which is the case here, if the other node has the filesystem mounted, these errors will show up in /var/log/messages

root@serviceguardnode2:~ # cmviewcl -v -p dbWPJ

PACKAGE STATUS STATE AUTO_RUN NODE
dbWPJ up running enabled serviceguardnode1

Policy_Parameters:
POLICY_NAME CONFIGURED_VALUE
Failover configured_node
Failback manual

Script_Parameters:
ITEM STATUS MAX_RESTARTS RESTARTS NAME
Service up 5 0 dbWPJmon
Subnet up 10.106.10.0

Node_Switching_Parameters:
NODE_TYPE STATUS SWITCHING NAME
Primary up enabled serviceguardnode1 (current)
Alternate up enabled serviceguardnode2

Dependency_Parameters:
DEPENDENCY_NAME NODE_NAME SATISFIED
dbWP0_dep serviceguardnode2 no
dbWP0_dep serviceguardnode1 yes

Other_Attributes:
ATTRIBUTE_NAME ATTRIBUTE_VALUE
Style modular
Priority no_priority

Checking the filesystems. I need to unmount /oracle/WPJ, but first I have to unmount everything under /oracle/WPJ; otherwise umount will report that /oracle/WPJ is busy

root@serviceguardnode2:~ # df -hP | grep WPJ
/dev/mapper/vgSAP-lv_WPJ_sys 93M 1.6M 87M 2% /usr/sap/WPJ/SYS
/dev/mapper/vgWPJ-lv_orawp0 4.4G 162M 4.0G 4% /oracle/WPJ
/dev/mapper/vgWPJ-lv_ora11264 27G 4.7G 21G 19% /oracle/WPJ/11204
/dev/mapper/vgWPJlog2-lv_origlogb 2.0G 423M 1.4G 23% /oracle/WPJ/origlogB
/dev/mapper/vgWPJlog2-lv_mirrloga 2.0G 404M 1.5G 22% /oracle/WPJ/mirrlogA
/dev/mapper/vgWPJlog1-lv_origloga 2.0G 423M 1.4G 23% /oracle/WPJ/origlogA
/dev/mapper/vgWPJlog1-lv_mirrlogb 2.0G 404M 1.5G 22% /oracle/WPJ/mirrlogB
/dev/mapper/vgWPJdata-lv_sapdata4 75G 21G 55G 28% /oracle/WPJ/sapdata4
/dev/mapper/vgWPJdata-lv_sapdata3 75G 79M 75G 1% /oracle/WPJ/sapdata3
/dev/mapper/vgWPJdata-lv_sapdata2 75G 7.3G 68G 10% /oracle/WPJ/sapdata2
/dev/mapper/vgWPJdata-lv_sapdata1 75G 1.1G 74G 2% /oracle/WPJ/sapdata1
/dev/mapper/vgWPJoraarch-lv_oraarch 20G 234M 19G 2% /oracle/WPJ/oraarch
scsWPJ:/export/sapmnt/WPJ/profile 4.4G 4.0M 4.1G 1% /sapmnt/WPJ/profile
scsWPJ:/export/sapmnt/WPJ/exe 4.4G 2.5G 1.7G 61% /sapmnt/WPJ/exe

Unmounting everything under /oracle/WPJ, then /oracle/WPJ itself

root@serviceguardnode2:~ # umount /oracle/WPJ/11204
root@serviceguardnode2:~ # umount /oracle/WPJ/origlogB
root@serviceguardnode2:~ # umount /oracle/WPJ/mirrlogA
root@serviceguardnode2:~ # umount /oracle/WPJ/origlogA
root@serviceguardnode2:~ # umount /oracle/WPJ/mirrlogB
root@serviceguardnode2:~ # umount /oracle/WPJ/sapdata4
root@serviceguardnode2:~ # umount /oracle/WPJ/sapdata3
root@serviceguardnode2:~ # umount /oracle/WPJ/sapdata2
root@serviceguardnode2:~ # umount /oracle/WPJ/sapdata1
root@serviceguardnode2:~ # umount /oracle/WPJ/oraarch
root@serviceguardnode2:~ # umount /oracle/WPJ
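The order above matters: children must be unmounted before their parent. Sorting mount points by depth automates this; a sketch using the number of slashes as the sort key (deepest_first is an illustrative helper, not a standard tool):

```shell
# print mount points deepest-first, so a parent is never unmounted
# before its children; feed the result into a umount loop
deepest_first() {
  # prefix each path with its slash count, sort descending, strip the count
  awk '{ print gsub("/", "/"), $0 }' | sort -rn | cut -d' ' -f2-
}

printf '%s\n' /oracle/WPJ /oracle/WPJ/sapdata1 /oracle/WPJ/origlogB |
  deepest_first
```

On a live system something like `df -hP | awk 'NR>1 {print $6}' | grep '^/oracle/WPJ' | deepest_first | while read m; do umount "$m"; done` would replace the hand-typed sequence.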

Samba share without permission. Directory showing as d---------

The server mounts a network share exported via CIFS

root@linux:~ # df -hP /arq/avf/ROT_EFC
Filesystem Size Used Avail Use% Mounted on
//172.20.1.2/Operacao_ROT_EFC$ 43G 40G 3.2G 93% /arq/avf/ROT_EFC

root@linux:~ # mount | grep ROT_EFC
//172.20.1.2/Operacao_ROT_EFC$ on /arq/avf/ROT_EFC type cifs (rw)

The filesystem's entry in /etc/fstab

root@linux:~ # grep Operacao_ROT_EFC /etc/fstab
//172.20.1.2/Operacao_ROT_EFC$ /arq/avf/ROT_EFC cifs _netdev,user=s-ad-USER1468,pass=userpassword,uid=21376,gid=889,file_mode=0775,dir_mode=0775,domain=setaoffice,cifsacl
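As an aside, keeping the password directly in /etc/fstab leaves it readable by anyone who can read that file; mount.cifs also accepts a credentials= option pointing at a root-only file. A sketch of the same entry rewritten that way (the /etc/cifs-creds path is just an example):

```
# /etc/fstab
//172.20.1.2/Operacao_ROT_EFC$ /arq/avf/ROT_EFC cifs _netdev,credentials=/etc/cifs-creds,uid=21376,gid=889,file_mode=0775,dir_mode=0775,domain=setaoffice,cifsacl 0 0

# /etc/cifs-creds (chmod 600)
username=s-ad-USER1468
password=userpassword
```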

The directory shows no permissions, and they can't be changed from the Linux server

root@linux:~ # ls -ld /arq/avf/ROT_EFVM /arq/avf/ROT_EFC
d——— 7 user1468 admweb 0 Sep 15 15:53 /arq/avf/ROT_EFC

root@linux:~ # chmod 775 /arq/avf/ROT_EFC
chmod: changing permissions of `/arq/avf/ROT_EFC': Permission denied

Mounting manually works without a problem

mount -t cifs //172.20.1.2/Operacao_ROT_EFC$ /arq/avf/ROT_EFC -o "username=s-ad-USER1468,domain=setaoffice,uid=21376,gid=889,file_mode=0775,dir_mode=0775"

I rewrote the entry in /etc/fstab; there must have been a hidden character in the original line

root@linux:~ # umount /arq/avf/ROT_EFC
root@linux:~ # mount /arq/avf/ROT_EFC
root@linux:~ # ls -dl /arq/avf/ROT_EFC
drwxrwxr-x 7 user1468 admweb 0 Sep 15 15:53 /arq/avf/ROT_EFC
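A stray carriage return (for example, from editing the file on Windows) is invisible in vi but breaks option parsing. `cat -A` makes such characters visible; a sketch against a deliberately broken line (the /tmp/fstab.test path is only for illustration):

```shell
# build a line with a trailing carriage return, then expose it:
printf '//172.20.1.2/share /mnt cifs _netdev\r\n' > /tmp/fstab.test
cat -A /tmp/fstab.test    # the CR shows up as ^M before the end-of-line $
```

If a ^M appears, `sed -i 's/\r$//' /etc/fstab` strips it without retyping the line.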

Linux: ls or df hangs on /

If you run ls or df -h, the commands appear to hang

Check whether you have NFS shares mounted

root@linux:~ # mount | grep nfs
sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)
nfsd on /proc/fs/nfsd type nfsd (rw)
mdmMPC:/export/sapmnt/MPC on /sapmnt/MPC type nfs (rw,nfsvers=3,proto=tcp,noac,soft,sloppy,addr=10.106.10.118)
linuxnfs25:/oracle/HP0/sapdata1/DUMP on /dump type nfs (rw,addr=142.40.81.32)
nfshp0:/export/sapmnt/HP0/exe on /sapmnt/HP0/exe type nfs (rw,nfsvers=3,proto=udp,noac,soft,sloppy,addr=10.106.10.28)
nfshp0:/export/sapmnt/HP0/profile on /sapmnt/HP0/profile type nfs (rw,nfsvers=3,proto=udp,noac,soft,sloppy,addr=10.106.10.28)

The server exporting one of these shares had been turned off, so I tried to unmount the share

root@linux:~ # umount /dump
umount.nfs: /dump: device is busy
umount.nfs: /dump: device is busy

Forcing the unmount didn't work either

root@linux:~ # umount -f /dump
umount2: Device or resource busy
umount.nfs: /dump: device is busy
umount2: Device or resource busy
umount.nfs: /dump: device is busy

Running umount with -l to do a lazy unmount

root@linux:~ # umount -l /dump
root@linux:~ # df -h /dump
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/vgroot-lv_root
7.8G 818M 6.6G 11% /
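To check whether the remaining NFS mounts are alive without hanging the shell again, the mount point can be probed under a timeout. A sketch assuming GNU coreutils `timeout` (check_mount is an illustrative helper; 5 seconds is an arbitrary choice):

```shell
# probe a mount point; a hung NFS mount makes stat block, so cap the wait
check_mount() {
  if timeout 5 stat -t "$1" >/dev/null 2>&1; then
    echo "$1: responsive"
  else
    echo "$1: stale or unreachable"
  fi
}

check_mount /sapmnt/MPC
```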

ORA-00206: error in writing (block 42, # blocks 1) of control file

The DBA notified me of this error

Wed Nov 25 13:46:11 BRST 2015
Errors in file /usr/software/oracle/admin/orarvt015/bdump/rvt015_lgwr_2388.trc:
ORA-00206: error in writing (block 42, # blocks 1) of control file
ORA-00202: control file: '/usr/oradata/orarvt015/control2/control02.ctl'
ORA-27061: waiting for async I/Os failed
Linux-x86_64 Error: 5: Input/output error

Unmounting the filesystem

root@linux:~ # umount /usr/oradata/orarvt015

I ran fsck

root@linux:~ # fsck -t ext3 /dev/mapper/oradatavg-dat.orarvt015new
fsck 1.38 (30-Jun-2005)
e2fsck 1.38 (30-Jun-2005)
/dev/mapper/oradatavg-dat.orarvt015new: recovering journal
/dev/mapper/oradatavg-dat.orarvt015new: clean, 137/209715200 files, 318872229/419428352 blocks

Then I mounted the filesystem again

root@linux:~ # mount /usr/oradata/orarvt015

Showing filesystem size

root@linux:~ # df -h /usr/oradata/orarvt015
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/oradatavg-dat.orarvt015new
1.6T 1.2T 304G 80% /usr/oradata/orarvt015

That solved the problem.

Exporting directory using NFS on Red Hat Enterprise Linux 6.7 and mounting on other server

I will mount two filesystems, /export/mnt/HD0/global and /export/mnt/HD0/profile, exported from linux01.

Both servers are running Red Hat Enterprise Linux 6.7

Checking the filesystems that will be exported

root@linux01:~ # df -h /mnt/HD0/global
Filesystem Size Used Avail Use% Mounted on
linux01:/export/mnt/HD0/global
3.5G 23M 3.3G 1% /mnt/HD0/global

root@linux01:~ # df -h /mnt/HD0/profile
Filesystem Size Used Avail Use% Mounted on
linux01:/export/mnt/HD0/profile
3.5G 3.0M 3.3G 1% /mnt/HD0/profile

Checking which filesystems are being exported from linux01

root@linux02:~ # showmount -e linux01
Export list for linux01:
/export/mnt/HD0/exe linux01.setaoffice.com
/export/mnt/HD0/profile linux01.setaoffice.com
/export/mnt/HD0/global linux01.setaoffice.com
/export/interface/SAP linux01.setaoffice.com
/export/interface/HD0 linux01.setaoffice.com

Editing /etc/exports on linux01 to export the shares to linux02

root@linux01:~ # vi /etc/exports
/export/interface/HD0 linux01(rw,no_root_squash,sync)
/export/interface/SAP linux01(rw,no_root_squash,sync)
/export/mnt/HD0/global linux01(rw,no_root_squash,sync) linux02(rw,no_root_squash,sync)
/export/mnt/HD0/profile linux01(rw,no_root_squash,sync) linux02(rw,no_root_squash,sync)
/export/mnt/HD0/exe linux01(rw,no_root_squash,sync)
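After editing /etc/exports, `exportfs -ra` re-exports everything without restarting the NFS daemons. To sanity-check an edited line first, a small parser sketch can confirm which clients a path is granted to (has_export and the temp file are illustrative only, and the parse assumes the simple `path client(options)` layout used above):

```shell
# succeed if the exports file ($1) grants client $3 access to path $2
has_export() {
  awk -v p="$2" -v c="$3" '
    $1 == p { for (i = 2; i <= NF; i++) if (index($i, c "(") == 1) ok = 1 }
    END { exit !ok }' "$1"
}

# example: the edited lines from above
cat > /tmp/exports.test <<'EOF'
/export/mnt/HD0/global linux01(rw,no_root_squash,sync) linux02(rw,no_root_squash,sync)
/export/mnt/HD0/exe linux01(rw,no_root_squash,sync)
EOF

has_export /tmp/exports.test /export/mnt/HD0/global linux02 && echo "linux02 allowed"
```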

I stopped and started NFS to apply the change

root@linux01:~ # /etc/init.d/nfs stop
Shutting down NFS daemon: [FAILED]
Shutting down NFS mountd: [ OK ]
Shutting down NFS quotas: [ OK ]
Shutting down NFS services: [ OK ]
Shutting down RPC idmapd: [ OK ]
root@linux01:~ # /etc/init.d/nfs start
Starting NFS services: [ OK ]
Starting NFS quotas: [ OK ]
Starting NFS mountd: [ OK ]
Starting NFS daemon: [ OK ]
Starting RPC idmapd: [ OK ]

Mounting and checking the NFS share

root@linux02:~ # mount linux01:/export/mnt/HD0/profile /mnt/HD0/profile
root@linux02:~ #

root@linux02:~ # mount linux01:/export/mnt/HD0/global /mnt/HD0/global
root@linux02:~ #

root@linux02:~ # df -h /mnt/HD0/profile
Filesystem Size Used Avail Use% Mounted on
linux01:/export/mnt/HD0/profile
3.5G 3.0M 3.3G 1% /mnt/HD0/profile

root@linux02:~ # df -h /mnt/HD0/global
Filesystem Size Used Avail Use% Mounted on
linux01:/export/mnt/HD0/global
3.5G 23M 3.3G 1% /mnt/HD0/global

Couldn't mount an AIX file system. The first message says the media is not formatted

I had a problem mounting a file system after the server rebooted. At first it reported that the logical volume wasn't formatted or that the format was incorrect; then it said to run a full fsck.

root@aix5:/ # mount /fallback
Replaying log for /dev/fallback.
mount: 0506-324 Cannot mount /dev/fallback on /fallback: The media is not formatted or the format is not correct.
0506-342 The superblock on /dev/fallback is dirty.  Run a full fsck to fix.

I just ran fsck on the logical volume and then mounted the file system.

root@aix5:/ # fsck /dev/fallback

****************
The current volume is: /dev/fallback
**Phase 1 - Check Blocks, Files/Directories, and Directory Entries
**Phase 2 - Count links
**Phase 3 - Duplicate Block Rescan and Directory Connectedness
**Phase 4 - Report Problems
**Phase 5 - Check Connectivity
**Phase 7 - Verify File/Directory Allocation Maps
**Phase 8 - Verify Disk Allocation Maps
15728640 kilobytes total disk space.
63 kilobytes in 30 directories.
7664455 kilobytes in 438 user files.
8061172 kilobytes are available for use.
File system is clean.
Superblock is marked dirty; FIX? y
All observed inconsistencies have been repaired.

root@aix5:/ # mount /fallback

root@aix5:/ # df -g /fallback
Filesystem    GB blocks      Free %Used    Iused %Iused Mounted on
/dev/fallback      15.00      7.69   49%      470     1% /fallback
