

Linux EXT4-fs: error (device dm-156): ext4_lookup: deleted inode referenced: 1091357

Node : serviceguardnode2.setaoffice.com
Node Type : Intel/AMD x64(HTTPS)
Severity : minor
OM Server Time: 2016-12-22 18:22:32
Message : EXT4-fs: error (device dm-156): ext4_lookup: deleted inode referenced: 1091357
Msg Group : OS
Application : dmsg_mon
Object : EXT4
Event Type :
not_found

Instance Name :
not_found

Instruction : No

Checking which device is complaining. dm-156 is /dev/vgWPJ/lv_orawp0

root@serviceguardnode2:/dev/mapper # ls -l | grep 156
lrwxrwxrwx. 1 root root 9 Dec 14 22:15 vgWPJ-lv_orawp0 -> ../dm-156
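
If grepping /dev/mapper ever feels fragile, LVM can be asked directly which logical volume is backing a given dm minor number. A sketch, assuming the lv_kernel_minor report field available in LVM2:

root@serviceguardnode2:~ # lvs --noheadings -o vg_name,lv_name,lv_kernel_minor | awk '$3 == 156'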

The filesystem is currently mounted

root@serviceguardnode2:/dev/mapper # mount | grep lv_orawp0
/dev/mapper/vgWPJ-lv_orawp0 on /oracle/WPJ type ext4 (rw,errors=remount-ro,data_err=abort,barrier=0)
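
Before touching anything, the superblock's recorded state can be checked with a read-only tune2fs query, which is safe on a mounted filesystem. A sketch:

root@serviceguardnode2:~ # tune2fs -l /dev/vgWPJ/lv_orawp0 | grep -i 'filesystem state'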

And the logical volume is open

root@serviceguardnode2:~ # lvs vgWPJ
LV          VG    Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
lv_ora11264 vgWPJ -wi-ao---- 30.00g
lv_orawp0   vgWPJ -wi-ao----  5.00g

This is a clustered environment, and the package is currently running on the other node

root@serviceguardnode2:/dev/mapper # cmviewcl | grep -i wpj
dbWPJ up running enabled serviceguardnode1

There is a Red Hat note referencing the error – “ext4_lookup: deleted inode referenced” errors in /var/log/messages in RHEL 6.

In clustered environments, which is the case here, if the other node has the filesystem mounted, these errors will be thrown in /var/log/messages

root@serviceguardnode2:~ # cmviewcl -v -p dbWPJ

PACKAGE STATUS STATE AUTO_RUN NODE
dbWPJ up running enabled serviceguardnode1

Policy_Parameters:
POLICY_NAME CONFIGURED_VALUE
Failover configured_node
Failback manual

Script_Parameters:
ITEM STATUS MAX_RESTARTS RESTARTS NAME
Service up 5 0 dbWPJmon
Subnet up 10.106.10.0

Node_Switching_Parameters:
NODE_TYPE STATUS SWITCHING NAME
Primary up enabled serviceguardnode1 (current)
Alternate up enabled serviceguardnode2

Dependency_Parameters:
DEPENDENCY_NAME NODE_NAME SATISFIED
dbWP0_dep serviceguardnode2 no
dbWP0_dep serviceguardnode1 yes

Other_Attributes:
ATTRIBUTE_NAME ATTRIBUTE_VALUE
Style modular
Priority no_priority

Checking the filesystems. I need to unmount /oracle/WPJ, but first I need to unmount everything under /oracle/WPJ, otherwise umount will report that /oracle/WPJ is busy

root@serviceguardnode2:~ # df -hP | grep WPJ
/dev/mapper/vgSAP-lv_WPJ_sys 93M 1.6M 87M 2% /usr/sap/WPJ/SYS
/dev/mapper/vgWPJ-lv_orawp0 4.4G 162M 4.0G 4% /oracle/WPJ
/dev/mapper/vgWPJ-lv_ora11264 27G 4.7G 21G 19% /oracle/WPJ/11204
/dev/mapper/vgWPJlog2-lv_origlogb 2.0G 423M 1.4G 23% /oracle/WPJ/origlogB
/dev/mapper/vgWPJlog2-lv_mirrloga 2.0G 404M 1.5G 22% /oracle/WPJ/mirrlogA
/dev/mapper/vgWPJlog1-lv_origloga 2.0G 423M 1.4G 23% /oracle/WPJ/origlogA
/dev/mapper/vgWPJlog1-lv_mirrlogb 2.0G 404M 1.5G 22% /oracle/WPJ/mirrlogB
/dev/mapper/vgWPJdata-lv_sapdata4 75G 21G 55G 28% /oracle/WPJ/sapdata4
/dev/mapper/vgWPJdata-lv_sapdata3 75G 79M 75G 1% /oracle/WPJ/sapdata3
/dev/mapper/vgWPJdata-lv_sapdata2 75G 7.3G 68G 10% /oracle/WPJ/sapdata2
/dev/mapper/vgWPJdata-lv_sapdata1 75G 1.1G 74G 2% /oracle/WPJ/sapdata1
/dev/mapper/vgWPJoraarch-lv_oraarch 20G 234M 19G 2% /oracle/WPJ/oraarch
scsWPJ:/export/sapmnt/WPJ/profile 4.4G 4.0M 4.1G 1% /sapmnt/WPJ/profile
scsWPJ:/export/sapmnt/WPJ/exe 4.4G 2.5G 1.7G 61% /sapmnt/WPJ/exe

Unmounting everything under /oracle/WPJ first, then /oracle/WPJ itself

root@serviceguardnode2:~ # umount /oracle/WPJ/11204
root@serviceguardnode2:~ # umount /oracle/WPJ/origlogB
root@serviceguardnode2:~ # umount /oracle/WPJ/mirrlogA
root@serviceguardnode2:~ # umount /oracle/WPJ/origlogA
root@serviceguardnode2:~ # umount /oracle/WPJ/mirrlogB
root@serviceguardnode2:~ # umount /oracle/WPJ/sapdata4
root@serviceguardnode2:~ # umount /oracle/WPJ/sapdata3
root@serviceguardnode2:~ # umount /oracle/WPJ/sapdata2
root@serviceguardnode2:~ # umount /oracle/WPJ/sapdata1
root@serviceguardnode2:~ # umount /oracle/WPJ/oraarch
root@serviceguardnode2:~ # umount /oracle/WPJ
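
The same unmount sequence can be scripted. A sketch, assuming no process still holds the mount points: reverse-sorting the mount points makes the deeper paths come first, so children are unmounted before their parents:

root@serviceguardnode2:~ # df -hP | awk '$6 ~ "^/oracle/WPJ" {print $6}' | sort -r | xargs -r -n 1 umount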


UXMON: Volume UUID=c7c47b25-30d8-42bc-8ca8-13f939b5c7b8 should be mounted on /srv. Please check your vfstab, fstab or filesystems file

I received this ticket:

ATTENTION, RMC LEVEL 1 AGENT: This ticket will be automatically worked by the Automation Bus. Pls. ensure your Ticket List/View includes the “Assignee” column, monitor this ticket until the user “ABOPERATOR” is no longer assigned, BEFORE you start work on this ticket.
Node : linux.setaoffice.com
Node Type : Intel/AMD x64(HTTPS)
Severity : warning
OM Server Time: 2016-06-30 17:06:04
Message : UXMON: Volume UUID=c7c47b25-30d8-42bc-8ca8-13f939b5c7b8 should be mounted on /srv. Please check your vfstab fstab or filesystems file. Please also check: UUID=c7c47b25-30d8-42bc-8ca8-13f939b5c7b8 UUID=c7c47b25-30d8-42bc-8ca8-13f939b5c7b8 UUID=c7c47b25-30d8-42bc-8ca8-13f939b5c7b8 UUID=c7c47b25-30d8-42bc-8ca8-13f939b5c7b8 UUID=c7c47b25-30d8-42bc-8ca8-13f939b5c7b8 UUID=c7c47b25-30d8-42bc-8ca8-13f939b5c7b8 UUID=c7c47b25-30d8-42bc-8ca8-13f939b5c7b8
Msg Group : OS
Application : volmon
Object : LVM
Event Type : NONE
Instance Name : NONE
Instruction : No

Checking with UXMONbroker, I see that it reports UUID=c7c47b25-30d8-42bc-8ca8-13f939b5c7b8 as not mounted.

This server runs SUSE Linux Enterprise Server 11 SP3

root@linux:~ # cat /etc/*release
SUSE Linux Enterprise Server 11 (x86_64)
VERSION = 11
PATCHLEVEL = 3
Build no: 1565
Build date: Fri Aug 14 07:53:12 CEST 2015
Kiwi version: 7.02.58
LSB_VERSION="core-2.0-noarch:core-3.2-noarch:core-4.0-noarch:core-2.0-x86_64:core-3.2-x86_64:core-4.0-x86_64"

root@linux:~ # /var/opt/OV/bin/instrumentation/UXMONbroker -check volmon
Mon Jul 4 11:52:53 2016 : INFO : UXMONvolmon is running now, pid=33366
connect() failed on local socket: No such file or directory
Internal cluster locking initialisation failed.
WARNING: Falling back to local file-based locking.
Volume Groups with the clustered attribute will be inaccessible.
Finding all volume groups
Finding volume group “vg_log1_dp_14”
Finding volume group “vg_data1_dp_11”
Finding volume group “vg_log1_14”
Finding volume group “vg_log1_dp_12”
Finding volume group “vg_log1_dp_11”
Finding volume group “vg_data1_dp_13”
Finding volume group “vg_data1_dp_14”
Finding volume group “vg_data1_dp_12”
Finding volume group “vg_data1_11”
Finding volume group “vg_log1_dp_13”
Finding volume group “vg_data1_14”
Finding volume group “vg_log1_11”
Finding volume group “vg_data1_12”
Finding volume group “vg_log1_12”
Finding volume group “vg_log1_13”
Finding volume group “vg_data1_13”
Mon Jul 4 11:52:58 2016 : VOLMON: CMA(NONE,NONE) Volume UUID=c7c47b25-30d8-42bc-8ca8-13f939b5c7b8 should be mounted on /srv .Please also check: UUID=c7c47b25-30d8-42bc-8ca8-13f939b5c7b8 UUID=c7c47b25-30d8-42bc-8ca8-13f939b5c7b8 UUID=c7c47b25-30d8-42bc-8ca8-13f939b5c7b8 UUID=c7c47b25-30d8-42bc-8ca8-13f939b5c7b8 UUID=c7c47b25-30d8-42bc-8ca8-13f939b5c7b8 UUID=c7c47b25-30d8-42bc-8ca8-13f939b5c7b8 UUID=c7c47b25-30d8-42bc-8ca8-13f939b5c7b8
mv: `/dev/null' and `/dev/null' are the same file
Mon Jul 4 11:52:58 2016 : INFO : UXMONvolmon end, pid=33366

The filesystem is, however, mounted

root@linux:~ # df -h /srv
Filesystem Size Used Avail Use% Mounted on
/dev/dm-16 509G 14G 494G 3% /srv

This server is using btrfs. blkid shows the device carries exactly the UUID the monitor is complaining about, so the volume really is mounted and the alert looks like a false positive from volmon not recognizing the btrfs volume

root@linux:~ # blkid /dev/dm-16
/dev/dm-16: UUID="c7c47b25-30d8-42bc-8ca8-13f939b5c7b8" UUID_SUB="ebe2d68d-0b4f-4586-bd40-6476a824f170" TYPE="btrfs"
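
To see how the monitor expects the volume to be mounted, check the fstab entry for that UUID. A sketch, assuming volmon compares fstab UUID entries against mounted devices:

root@linux:~ # grep c7c47b25 /etc/fstab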

Rotate /var/log/faillog, /var/log/lastlog and /var/log/tallylog

If you are having disk space problems in /var and it looks like /var/log/faillog, /var/log/lastlog and /var/log/tallylog are filling up the space, you probably don't need to rotate them at all

root@linux:~ # df -h /var/log
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/vgroot-lv_var_log
4.9G 2.0G 2.8G 42% /var/log

These are sparse files: their apparent size is huge, but they occupy almost no disk space, as the ls/du comparisons below show. Keep looking for the real offender; one way to hunt for it is shown after the comparisons

root@linux:~ # ls -lh /var/log/faillog
-rw-------. 1 root root 258M Apr 25 14:07 /var/log/faillog

root@linux:~ # du -sh /var/log/faillog
624K /var/log/faillog
root@linux:~ # du -h --apparent-size /var/log/faillog
258M /var/log/faillog

root@linux:~ # ls -lh /var/log/lastlog
-rw-------. 1 root root 2.3G May 2 11:05 /var/log/lastlog

root@linux:~ # du -h /var/log/lastlog
348K /var/log/lastlog
root@linux:~ # du -h --apparent-size /var/log/lastlog
2.3G /var/log/lastlog

root@linux:~ # ls -lh /var/log/tallylog
-rw-------. 1 root root 515M May 2 10:42 /var/log/tallylog

root@linux:~ # du -sh /var/log/tallylog
288K /var/log/tallylog
root@linux:~ # du -sh --apparent-size /var/log/tallylog
515M /var/log/tallylog
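
To find what is actually consuming the space, measure allocated blocks rather than apparent sizes. A sketch, assuming GNU du and sort:

root@linux:~ # du -xsh /var/log/* | sort -rh | head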

Source: faillog command create a huge file, like 128GB file (/var/log/faillog)

Why is the /var/log/lastlog file so large?
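
If you want to see sparse-file behavior for yourself, here is a harmless demonstration, assuming GNU coreutils (the file name /tmp/sparse_demo is made up):

root@linux:~ # truncate -s 1G /tmp/sparse_demo
root@linux:~ # ls -lh /tmp/sparse_demo # apparent size: 1.0G
root@linux:~ # du -h /tmp/sparse_demo # allocated blocks: 0
root@linux:~ # rm /tmp/sparse_demo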

Linux – command ls or df hangs on /

If you run ls or df -h, these commands may appear to hang.

Check whether you have NFS shares mounted

root@linux:~ # mount | grep nfs
sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)
nfsd on /proc/fs/nfsd type nfsd (rw)
mdmMPC:/export/sapmnt/MPC on /sapmnt/MPC type nfs (rw,nfsvers=3,proto=tcp,noac,soft,sloppy,addr=10.106.10.118)
linuxnfs25:/oracle/HP0/sapdata1/DUMP on /dump type nfs (rw,addr=142.40.81.32)
nfshp0:/export/sapmnt/HP0/exe on /sapmnt/HP0/exe type nfs (rw,nfsvers=3,proto=udp,noac,soft,sloppy,addr=10.106.10.28)
nfshp0:/export/sapmnt/HP0/profile on /sapmnt/HP0/profile type nfs (rw,nfsvers=3,proto=udp,noac,soft,sloppy,addr=10.106.10.28)

One of these shares comes from a server that was turned off. So I tried to unmount the share

root@linux:~ # umount /dump
umount.nfs: /dump: device is busy
umount.nfs: /dump: device is busy

Forcing it didn't work either

root@linux:~ # umount -f /dump
umount2: Device or resource busy
umount.nfs: /dump: device is busy
umount2: Device or resource busy
umount.nfs: /dump: device is busy
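
Before falling back to a lazy unmount, it can help to see what is holding the mount point, though fuser itself may block on a dead NFS server. A sketch:

root@linux:~ # fuser -vm /dump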

umount with -l does a lazy unmount: the filesystem is detached from the hierarchy immediately and cleaned up once it is no longer busy

root@linux:~ # umount -l /dump
root@linux:~ # df -h /dump
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/vgroot-lv_root
7.8G 818M 6.6G 11% /
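
If the share is remounted once the server is back, soft-mount options keep ls and df from blocking forever the next time it disappears. A sketch, with timeout values that are an assumption, not something from the original setup:

root@linux:~ # mount -t nfs -o soft,timeo=100,retrans=3 linuxnfs25:/oracle/HP0/sapdata1/DUMP /dump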

Exporting directory using NFS on Red Hat Enterprise Linux 6.7 and mounting on other server

I will mount the filesystems /export/mnt/HD0/global and /export/mnt/HD0/profile, exported from linux01, on linux02.

Both servers are running Red Hat Enterprise Linux 6.7

Checking filesystem that will be exported

root@linux01:~ # df -h /mnt/HD0/global
Filesystem Size Used Avail Use% Mounted on
linux01:/export/mnt/HD0/global
3.5G 23M 3.3G 1% /mnt/HD0/global

root@linux01:~ # df -h /mnt/HD0/profile
Filesystem Size Used Avail Use% Mounted on
linux01:/export/mnt/HD0/profile
3.5G 3.0M 3.3G 1% /mnt/HD0/profile

Checking which filesystems are being exported from linux01

root@linux02:~ # showmount -e linux01
Export list for linux01:
/export/mnt/HD0/exe linux01.setaoffice.com
/export/mnt/HD0/profile linux01.setaoffice.com
/export/mnt/HD0/global linux01.setaoffice.com
/export/interface/SAP linux01.setaoffice.com
/export/interface/HD0 linux01.setaoffice.com

Edit file /etc/exports on linux01 to export the share to linux02

root@linux01:~ # vi /etc/exports
/export/interface/HD0 linux01(rw,no_root_squash,sync)
/export/interface/SAP linux01(rw,no_root_squash,sync)
/export/mnt/HD0/global linux01(rw,no_root_squash,sync) linux02(rw,no_root_squash,sync)
/export/mnt/HD0/profile linux01(rw,no_root_squash,sync) linux02(rw,no_root_squash,sync)
/export/mnt/HD0/exe linux01(rw,no_root_squash,sync)
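
As an alternative to restarting the services, exportfs can re-read /etc/exports in place without interrupting existing clients. A sketch:

root@linux01:~ # exportfs -ra
root@linux01:~ # exportfs -v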

Here I stopped and started NFS instead. The stop reported a failure for the NFS daemon, but the subsequent start came up cleanly

root@linux01:~ # /etc/init.d/nfs stop
Shutting down NFS daemon: [FAILED]
Shutting down NFS mountd: [ OK ]
Shutting down NFS quotas: [ OK ]
Shutting down NFS services: [ OK ]
Shutting down RPC idmapd: [ OK ]
root@linux01:~ # /etc/init.d/nfs start
Starting NFS services: [ OK ]
Starting NFS quotas: [ OK ]
Starting NFS mountd: [ OK ]
Starting NFS daemon: [ OK ]
Starting RPC idmapd: [ OK ]

Mounting and checking the NFS share

root@linux02:~ # mount linux01:/export/mnt/HD0/profile /mnt/HD0/profile
root@linux02:~ #

root@linux02:~ # mount linux01:/export/mnt/HD0/global /mnt/HD0/global
root@linux02:~ #

root@linux02:~ # df -h /mnt/HD0/profile
Filesystem Size Used Avail Use% Mounted on
linux01:/export/mnt/HD0/profile
3.5G 3.0M 3.3G 1% /mnt/HD0/profile

root@linux02:~ # df -h /mnt/HD0/global
Filesystem Size Used Avail Use% Mounted on
linux01:/export/mnt/HD0/global
3.5G 23M 3.3G 1% /mnt/HD0/global
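
To make these mounts persist across reboots on linux02, matching entries would go in /etc/fstab. Hypothetical lines, not part of the original change:

linux01:/export/mnt/HD0/global /mnt/HD0/global nfs defaults 0 0
linux01:/export/mnt/HD0/profile /mnt/HD0/profile nfs defaults 0 0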

Solaris – ls or df error: cannot canonicalize .: Permission denied

On systems where the umask is set to 027, the directory created to be used as a mount point appears to have the correct permissions, but as a regular user you can't get the disk space utilization or list (ls -la) the permissions of the current directory (.) or the level above (..)

user@solaris:~ $ umask
027

user@solaris:~ $ df -k /backup_export
df: cannot canonicalize .: Permission denied

user@solaris:/ $ ls -ld /backup_export
drwxr-xr-x 17 oracle oinstall 1024 Dec 12 15:33 /backup_export

As root, there is no problem executing the same command you used as a regular user

root@solaris:~ # df -k /backup_export
Filesystem kbytes used avail capacity Mounted on
/dev/vx/dsk/dadosdg/vol_bkp
92409856 22803105 65256478 26% /backup_export

To solve this problem, unmount the filesystem

root@solaris:/ # umount /backup_export

root@solaris:/ # ls -ld /backup_export
drwxr-x--- 2 root root 512 Aug 13 2007 /backup_export

Now apply permissions that make the underlying directory traversable by regular users. I set the permissions to 777 and mounted the filesystem again

root@solaris:/ # chmod 777 /backup_export

root@solaris:/ # mount /backup_export

root@solaris:/ # df -k /backup_export
Filesystem kbytes used avail capacity Mounted on
/dev/vx/dsk/dadosdg/vol_bkp
92409856 22803105 65256478 26% /backup_export

The visible permissions are back to what they used to be, and the error message no longer appears

user@solaris:~ $ df -k /backup_export
Filesystem kbytes used avail capacity Mounted on
/dev/vx/dsk/dadosdg/vol_bkp
92409856 22803105 65256478 26% /backup_export

user@solaris:/ $ ls -ld /backup_export
drwxr-xr-x 17 oracle oinstall 1024 Dec 12 15:33 /backup_export
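
In hindsight, a more conservative mode would probably have been enough: 0755 on the underlying mount point grants regular users the traversal that df and ls need without making it world-writable. An assumption on my part, not what was done here:

root@solaris:/ # chmod 755 /backup_export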
