Tag: df

Linux – umount: /filesystem: device is busy. No processes found using lsof and fuser

root@linux:~ # df -hP | grep SCR
/dev/mapper/vgSAPlocal-lv_sapmnt_SCR 15G 2.0G 12G 15% /sapmnt/SCR
scsscr:/export/sapmnt/SCR/exe 4.2G 3.5G 473M 89% /sapmnt/SCR/exe
scsscr:/export/sapmnt/SCR/global 4.2G 362M 3.6G 9% /sapmnt/SCR/global
scsscr:/export/sapmnt/SCR/profile 2.0G 3.0M 1.9G 1% /sapmnt/SCR/profile

I unmounted the filesystems under /sapmnt/SCR

root@linux:~ # umount /sapmnt/SCR/exe /sapmnt/SCR/global /sapmnt/SCR/profile
root@linux:~ #

But I was unable to unmount /sapmnt/SCR

root@linux:~ # umount /sapmnt/SCR
umount: /sapmnt/SCR: device is busy.
(In some cases useful info about processes that use
the device is found by lsof(8) or fuser(1))

Neither fuser nor lsof reported any processes using the mount point.
I was able to unmount it after restarting the autofs service

root@linux:~ # service autofs restart
Stopping automount: [ OK ]
Starting automount: [ OK ]
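In this case fuser and lsof came up empty because it was autofs itself, not a user-space process, holding the mount. A minimal sketch of checking whether an automount map still claims a path (the autofs_holds function and the fake mounts table are illustrative, not part of the original procedure):

```shell
#!/bin/sh
# autofs_holds: print autofs-managed mount points under a given path.
# Arguments: a mounts file (normally /proc/mounts) and a path prefix.
autofs_holds() {
    awk -v p="$2" '$3 == "autofs" && index($2, p) == 1 { print $2 }' "$1"
}

# Demo with a fake mounts table (illustrative entries)
cat > /tmp/mounts.$$ <<'EOF'
/dev/mapper/vgSAPlocal-lv_sapmnt_SCR /sapmnt/SCR ext3 rw 0 0
automount(pid1234) /sapmnt/SCR/exe autofs rw 0 0
EOF
autofs_holds /tmp/mounts.$$ /sapmnt/SCR
rm -f /tmp/mounts.$$
```

If this prints anything, restarting (or reloading) autofs is usually what releases the parent mount, as it did above.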

UXMON: Inode utilization of /var/opt exceeds 90 threshold

root@linux:/var/opt # df -i .
Filesystem Inodes IUsed IFree IUse% Mounted on
/dev/mapper/vgroot-lv_var_opt
327680 308929 18751 95% /var/opt

I found a script at this page – How to Free Inode Usage?

Save the script below to a file, make it executable, and run it to find which directories are holding the most files

#!/bin/bash
# count_em - count files in all subdirectories under the current directory.
echo 'echo $(ls -a "$1" | wc -l) $1' > /tmp/count_em_$$
chmod 700 /tmp/count_em_$$
find . -mount -type d -print0 | xargs -0 -n1 /tmp/count_em_$$ | sort -n
rm -f /tmp/count_em_$$
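For reference, the same count can be produced without the temporary helper script. This is a sketch (the count_entries name is mine); note that, like the script above, ls -a includes . and .. in every total:

```shell
#!/bin/sh
# count_entries: print "<entry count> <directory>" for each directory
# under $1, ascending, so the heaviest directories appear last.
count_entries() {
    find "$1" -xdev -type d -exec sh -c \
        'printf "%s %s\n" "$(ls -a "$1" | wc -l)" "$1"' _ {} \; | sort -n
}

# Demo against a scratch tree
d=$(mktemp -d)
mkdir -p "$d/big"
touch "$d/big/f1" "$d/big/f2" "$d/big/f3"
count_entries "$d"
rm -rf "$d"
```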

Most files are inside /var/opt/OV/log/OpC

root@linux:/var/opt/ # ./inodes.sh

457 ./osit/acf/log
599 ./newid
849 ./osit/linux/log/bdf.statistics
5066 ./erm/save
172726 ./OV/log/OpC

I deleted the files that were clearly unnecessary, which brought inode consumption down

root@linux:/var/opt # df -i .
Filesystem Inodes IUsed IFree IUse% Mounted on
/dev/mapper/vgroot-lv_var_opt
327680 181506 146174 56% /var/opt

Creating a jfs2 filesystem in AIX

Listing physical volumes

root@aix:/ # lspv
hdisk0 00c94ad454a2d4c5 rootvg active
hdisk1 00c94ad45808a18f rootvg active
hdisk3 00c94ad4229190d8 tsmpoolvg active
hdisk21 00ce196f4b9604c3 aplicvg active
hdisk22 00ce196f418e3f6d aplicvg active
hdisk44 00c94ad4c75dfb09 aplicvg active
hdisk2 none None
hdisk57 00c94ad481c4f2aa aplicvg active
hdisk55 00c94ad4f99f7480 tsm55dbvg active
hdisk56 00c94ad4f99f2d43 tsm55logvg active

Running cfgmgr to configure any newly presented devices

root@aix:/ # cfgmgr

Listing physical volumes

root@aix:/ # lspv
hdisk0 00c94ad454a2d4c5 rootvg active
hdisk1 00c94ad45808a18f rootvg active
hdisk3 00c94ad4229190d8 tsmpoolvg active
hdisk21 00ce196f4b9604c3 aplicvg active
hdisk22 00ce196f418e3f6d aplicvg active
hdisk44 00c94ad4c75dfb09 aplicvg active
hdisk2 none None
hdisk57 00c94ad481c4f2aa aplicvg active
hdisk55 00c94ad4f99f7480 tsm55dbvg active
hdisk56 00c94ad4f99f2d43 tsm55logvg active
hdisk4 none None

Comparing the lspv output, the new disk is hdisk4. Checking its unique ID to confirm it matches the expected LUN

root@aix:/ # lsattr -El hdisk4 | grep -i 600507680191818C1000000000000C98
unique_id 33213600507680191818C1000000000000C9804214503IBMfcp Device Unique Identification False

Using a small loop to query the unique_id of every disk

for i in `lspv | awk '{print $1}'`
do
echo $i `lsattr -El $i | grep unique_id`
done

Creating the volume group with a 16 MB PP size fails: the disk would need more physical partitions than the per-disk limit allows

root@aix:/ # mkvg -y tsmdbtmp -s 16 hdisk4
0516-1254 mkvg: Changing the PVID in the ODM.
0516-1208 mkvg: Warning, The Physical Partition Size of 16 requires the
creation of 32000 partitions for hdisk4. The system limitation is 16256
physical partitions per disk at a factor value of 16. Specify a larger
Physical Partition Size or a larger factor value in order create a
volume group on this disk.
0516-862 mkvg: Unable to create volume group.
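The arithmetic behind the error is straightforward. The disk is roughly 500 GB (the lsvg output below shows 15999 PPs x 32 MB = 511968 MB), so at a 16 MB PP size it would need about 32000 partitions, over the 16256-per-disk limit at factor 16:

```shell
# Back-of-the-envelope check (disk size taken from the lsvg output:
# 15999 PPs x 32 MB = 511968 MB, so call it 512000 MB)
disk_mb=512000
echo "PPs at 16 MB: $((disk_mb / 16))"   # 32000 -> exceeds the 16256 limit
echo "PPs at 32 MB: $((disk_mb / 32))"   # 16000 -> fits
```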

Creating volume group with PP SIZE 32MB.

root@aix:/ # mkvg -y tsmdbtmp -s 32 hdisk4
tsmdbtmp

Listing volume group information

root@aix:/ # lsvg tsmdbtmp
VOLUME GROUP: tsmdbtmp VG IDENTIFIER: 00c94ad400004c000000015d801914eb
VG STATE: active PP SIZE: 32 megabyte(s)
VG PERMISSION: read/write TOTAL PPs: 15999 (511968 megabytes)
MAX LVs: 256 FREE PPs: 15999 (511968 megabytes)
LVs: 0 USED PPs: 0 (0 megabytes)
OPEN LVs: 0 QUORUM: 2 (Enabled)
TOTAL PVs: 1 VG DESCRIPTORS: 2
STALE PVs: 0 STALE PPs: 0
ACTIVE PVs: 1 AUTO ON: yes
MAX PPs per VG: 32512
MAX PPs per PV: 16256 MAX PVs: 2
LTG size (Dynamic): 256 kilobyte(s) AUTO SYNC: no
HOT SPARE: no BB POLICY: relocatable
PV RESTRICTION: none INFINITE RETRY: no
DISK BLOCK SIZE: 512 CRITICAL VG: no

Creating the filesystem. The logical volume name is auto-generated as fslvXX

root@aix:/ # smitty crfs

Add a File System

Move cursor to desired item and press Enter.

Add an Enhanced Journaled File System
Add a Journaled File System
Add a CDROM File System

F1=Help F2=Refresh F3=Cancel F8=Image
F9=Shell F10=Exit Enter=Do

Add an Enhanced Journaled File System

Move cursor to desired item and press Enter.

Add an Enhanced Journaled File System
Add an Enhanced Journaled File System on a Previously Defined Logical Volume

F1=Help F2=Refresh F3=Cancel F8=Image
F9=Shell F10=Exit Enter=Do

Add an Enhanced Journaled File System

Move cursor to desired item and press Enter.

Add an Enhanced Journaled File System
Add an Enhanced Journaled File System on a Previously Defined Logical Volume

+--------------------------------------------------------------------------+
| Volume Group Name |
| |
| Move cursor to desired item and press Enter. |
| |
| rootvg |
| aplicvg |
| tsmpoolvg |
| tsm55logvg |
| tsm55dbvg |
| tsmdbtmp |
| |
| F1=Help F2=Refresh F3=Cancel |
| F8=Image F10=Exit Enter=Do |
F1| /=Find n=Find Next |
F9+--------------------------------------------------------------------------+

Add an Enhanced Journaled File System

Type or select values in entry fields.
Press Enter AFTER making all desired changes.

[Entry Fields]
Volume group name tsmdbtmp
SIZE of file system
Unit Size Megabytes +
* Number of units [511900] #
* MOUNT POINT [/tsmdbtmp]
Mount AUTOMATICALLY at system restart? yes +
PERMISSIONS read/write +
Mount OPTIONS [] +
Block Size (bytes) 4096 +
Logical Volume for Log +
Inline Log size (MBytes) [] #
Extended Attribute Format +
ENABLE Quota Management? no +
Enable EFS? no +
Allow internal snapshots? no +
Mount GROUP []

Mounting the new filesystem

root@aix:/ # mount /tsmdbtmp

Check filesystem size

root@aix:/ # df -m /tsmdbtmp
Filesystem MB blocks Free %Used Iused %Iused Mounted on
/dev/fslv08 511904.00 511825.51 1% 4 1% /tsmdbtmp

Linux EXT4-fs: error (device dm-156): ext4_lookup: deleted inode referenced: 1091357

Node : serviceguardnode2.setaoffice.com
Node Type : Intel/AMD x64(HTTPS)
Severity : minor
OM Server Time: 2016-12-22 18:22:32
Message : EXT4-fs: error (device dm-156): ext4_lookup: deleted inode referenced: 1091357
Msg Group : OS
Application : dmsg_mon
Object : EXT4
Event Type :
not_found

Instance Name :
not_found

Instruction : No

Checking which device is complaining. dm-156 is /dev/vgWPJ/lv_orawp0

root@serviceguardnode2:/dev/mapper # ls -l | grep 156
lrwxrwxrwx. 1 root root 9 Dec 14 22:15 vgWPJ-lv_orawp0 -> ../dm-156

The filesystem is currently mounted

root@serviceguardnode2:/dev/mapper # mount | grep lv_orawp0
/dev/mapper/vgWPJ-lv_orawp0 on /oracle/WPJ type ext4 (rw,errors=remount-ro,data_err=abort,barrier=0)

And the logical volume is open

root@serviceguardnode2:~ # lvs vgWPJ
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
lv_ora11264 vgWPJ -wi-ao—- 30.00g
lv_orawp0 vgWPJ -wi-ao—- 5.00g

This is a clustered environment and it is currently running on the other node

root@serviceguardnode2:/dev/mapper # cmviewcl | grep -i wpj
dbWPJ up running enabled serviceguardnode1

There is a Red Hat note referencing this error: "ext4_lookup: deleted inode referenced" errors in /var/log/messages in RHEL 6.

In clustered environments, which is the case here, these errors show up in /var/log/messages when the other node has the filesystem mounted

root@serviceguardnode2:~ # cmviewcl -v -p dbWPJ

PACKAGE STATUS STATE AUTO_RUN NODE
dbWPJ up running enabled serviceguardnode1

Policy_Parameters:
POLICY_NAME CONFIGURED_VALUE
Failover configured_node
Failback manual

Script_Parameters:
ITEM STATUS MAX_RESTARTS RESTARTS NAME
Service up 5 0 dbWPJmon
Subnet up 10.106.10.0

Node_Switching_Parameters:
NODE_TYPE STATUS SWITCHING NAME
Primary up enabled serviceguardnode1 (current)
Alternate up enabled serviceguardnode2

Dependency_Parameters:
DEPENDENCY_NAME NODE_NAME SATISFIED
dbWP0_dep serviceguardnode2 no
dbWP0_dep serviceguardnode1 yes

Other_Attributes:
ATTRIBUTE_NAME ATTRIBUTE_VALUE
Style modular
Priority no_priority

Checking the filesystems. I need to unmount /oracle/WPJ, but first I have to unmount everything under it; otherwise umount will report that /oracle/WPJ is busy

root@serviceguardnode2:~ # df -hP | grep WPJ
/dev/mapper/vgSAP-lv_WPJ_sys 93M 1.6M 87M 2% /usr/sap/WPJ/SYS
/dev/mapper/vgWPJ-lv_orawp0 4.4G 162M 4.0G 4% /oracle/WPJ
/dev/mapper/vgWPJ-lv_ora11264 27G 4.7G 21G 19% /oracle/WPJ/11204
/dev/mapper/vgWPJlog2-lv_origlogb 2.0G 423M 1.4G 23% /oracle/WPJ/origlogB
/dev/mapper/vgWPJlog2-lv_mirrloga 2.0G 404M 1.5G 22% /oracle/WPJ/mirrlogA
/dev/mapper/vgWPJlog1-lv_origloga 2.0G 423M 1.4G 23% /oracle/WPJ/origlogA
/dev/mapper/vgWPJlog1-lv_mirrlogb 2.0G 404M 1.5G 22% /oracle/WPJ/mirrlogB
/dev/mapper/vgWPJdata-lv_sapdata4 75G 21G 55G 28% /oracle/WPJ/sapdata4
/dev/mapper/vgWPJdata-lv_sapdata3 75G 79M 75G 1% /oracle/WPJ/sapdata3
/dev/mapper/vgWPJdata-lv_sapdata2 75G 7.3G 68G 10% /oracle/WPJ/sapdata2
/dev/mapper/vgWPJdata-lv_sapdata1 75G 1.1G 74G 2% /oracle/WPJ/sapdata1
/dev/mapper/vgWPJoraarch-lv_oraarch 20G 234M 19G 2% /oracle/WPJ/oraarch
scsWPJ:/export/sapmnt/WPJ/profile 4.4G 4.0M 4.1G 1% /sapmnt/WPJ/profile
scsWPJ:/export/sapmnt/WPJ/exe 4.4G 2.5G 1.7G 61% /sapmnt/WPJ/exe

Unmounting /oracle/WPJ and everything under it

root@serviceguardnode2:~ # umount /oracle/WPJ/11204
root@serviceguardnode2:~ # umount /oracle/WPJ/origlogB
root@serviceguardnode2:~ # umount /oracle/WPJ/mirrlogA
root@serviceguardnode2:~ # umount /oracle/WPJ/origlogA
root@serviceguardnode2:~ # umount /oracle/WPJ/mirrlogB
root@serviceguardnode2:~ # umount /oracle/WPJ/sapdata4
root@serviceguardnode2:~ # umount /oracle/WPJ/sapdata3
root@serviceguardnode2:~ # umount /oracle/WPJ/sapdata2
root@serviceguardnode2:~ # umount /oracle/WPJ/sapdata1
root@serviceguardnode2:~ # umount /oracle/WPJ/oraarch
root@serviceguardnode2:~ # umount /oracle/WPJ
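The ordering above matters: children must be unmounted before the parent. A sketch (the mounts_under helper is mine) that derives the deepest-first order from a mounts table, so paths come out in the order umount needs them:

```shell
#!/bin/sh
# mounts_under: list mount points at or under a prefix, deepest first.
# Arguments: a mounts file (normally /proc/mounts) and a path prefix.
mounts_under() {
    awk -v p="$2" '$2 == p || index($2, p "/") == 1 { print $2 }' "$1" | \
        awk '{ print length($0), $0 }' | sort -rn | cut -d" " -f2-
}

# Demo with a fake mounts table (a few of the entries from df above)
cat > /tmp/fake_mounts.$$ <<'EOF'
/dev/mapper/vgWPJ-lv_orawp0 /oracle/WPJ ext4 rw 0 0
/dev/mapper/vgWPJ-lv_ora11264 /oracle/WPJ/11204 ext4 rw 0 0
/dev/mapper/vgWPJoraarch-lv_oraarch /oracle/WPJ/oraarch ext4 rw 0 0
EOF
mounts_under /tmp/fake_mounts.$$ /oracle/WPJ
rm -f /tmp/fake_mounts.$$
```

The last line printed is /oracle/WPJ itself, which is exactly the mount that must be unmounted last.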

UXMON: Volume UUID=c7c47b25-30d8-42bc-8ca8-13f939b5c7b8 should be mounted on /srv. Please check your

I'm receiving this ticket

ATTENTION, RMC LEVEL 1 AGENT: This ticket will be automatically worked by the Automation Bus. Pls. ensure your Ticket List/View includes the “Assignee” column, monitor this ticket until the user “ABOPERATOR” is no longer assigned, BEFORE you start work on this ticket.
Node : linux.setaoffice.com
Node Type : Intel/AMD x64(HTTPS)
Severity : warning
OM Server Time: 2016-06-30 17:06:04
Message : UXMON: Volume UUID=c7c47b25-30d8-42bc-8ca8-13f939b5c7b8 should be mounted on /srv. Please check your vfstab fstab or filesystems file. Please also check: UUID=c7c47b25-30d8-42bc-8ca8-13f939b5c7b8 UUID=c7c47b25-30d8-42bc-8ca8-13f939b5c7b8 UUID=c7c47b25-30d8-42bc-8ca8-13f939b5c7b8 UUID=c7c47b25-30d8-42bc-8ca8-13f939b5c7b8 UUID=c7c47b25-30d8-42bc-8ca8-13f939b5c7b8 UUID=c7c47b25-30d8-42bc-8ca8-13f939b5c7b8 UUID=c7c47b25-30d8-42bc-8ca8-13f939b5c7b8
Msg Group : OS
Application : volmon
Object : LVM
Event Type : NONE
Instance Name : NONE
Instruction : No

Checking with the UXMONbroker, I see that it reports UUID=c7c47b25-30d8-42bc-8ca8-13f939b5c7b8 as not mounted.

This server is a SUSE Linux 11 SP3

root@linux:~ # cat /etc/*release
SUSE Linux Enterprise Server 11 (x86_64)
VERSION = 11
PATCHLEVEL = 3
Build no: 1565
Build date: Fri Aug 14 07:53:12 CEST 2015
Kiwi version: 7.02.58
LSB_VERSION="core-2.0-noarch:core-3.2-noarch:core-4.0-noarch:core-2.0-x86_64:core-3.2-x86_64:core-4.0-x86_64"

root@linux:~ # /var/opt/OV/bin/instrumentation/UXMONbroker -check volmon
Mon Jul 4 11:52:53 2016 : INFO : UXMONvolmon is running now, pid=33366
connect() failed on local socket: No such file or directory
Internal cluster locking initialisation failed.
WARNING: Falling back to local file-based locking.
Volume Groups with the clustered attribute will be inaccessible.
Finding all volume groups
Finding volume group “vg_log1_dp_14”
Finding volume group “vg_data1_dp_11”
Finding volume group “vg_log1_14”
Finding volume group “vg_log1_dp_12”
Finding volume group “vg_log1_dp_11”
Finding volume group “vg_data1_dp_13”
Finding volume group “vg_data1_dp_14”
Finding volume group “vg_data1_dp_12”
Finding volume group “vg_data1_11”
Finding volume group “vg_log1_dp_13”
Finding volume group “vg_data1_14”
Finding volume group “vg_log1_11”
Finding volume group “vg_data1_12”
Finding volume group “vg_log1_12”
Finding volume group “vg_log1_13”
Finding volume group “vg_data1_13”
Mon Jul 4 11:52:58 2016 : VOLMON: CMA(NONE,NONE) Volume UUID=c7c47b25-30d8-42bc-8ca8-13f939b5c7b8 should be mounted on /srv .Please also check: UUID=c7c47b25-30d8-42bc-8ca8-13f939b5c7b8 UUID=c7c47b25-30d8-42bc-8ca8-13f939b5c7b8 UUID=c7c47b25-30d8-42bc-8ca8-13f939b5c7b8 UUID=c7c47b25-30d8-42bc-8ca8-13f939b5c7b8 UUID=c7c47b25-30d8-42bc-8ca8-13f939b5c7b8 UUID=c7c47b25-30d8-42bc-8ca8-13f939b5c7b8 UUID=c7c47b25-30d8-42bc-8ca8-13f939b5c7b8
mv: `/dev/null' and `/dev/null' are the same file
Mon Jul 4 11:52:58 2016 : INFO : UXMONvolmon end, pid=33366

The filesystem is mounted

root@linux:~ # df -h /srv
Filesystem Size Used Avail Use% Mounted on
/dev/dm-16 509G 14G 494G 3% /srv

This server is using btrfs

root@linux:~ # blkid /dev/dm-16
/dev/dm-16: UUID="c7c47b25-30d8-42bc-8ca8-13f939b5c7b8" UUID_SUB="ebe2d68d-0b4f-4586-bd40-6476a824f170" TYPE="btrfs"

Rotate file /var/log/faillog, /var/log/lastlog and /var/log/tallylog

If you are having disk space problems in /var and /var/log/faillog, /var/log/lastlog and /var/log/tallylog appear to be filling up the space, you probably don't actually need to rotate them

root@linux:~ # df -h /var/log
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/vgroot-lv_var_log
4.9G 2.0G 2.8G 42% /var/log

These are sparse files and are occupying minimal disk space. Keep looking for the offender

root@linux:~ # ls -lh /var/log/faillog
-rw——-. 1 root root 258M Apr 25 14:07 /var/log/faillog

root@linux:~ # du -sh /var/log/faillog
624K /var/log/faillog
root@linux:~ # du -h --apparent-size /var/log/faillog
258M /var/log/faillog

root@linux:~ # ls -lh /var/log/lastlog
-rw——-. 1 root root 2.3G May 2 11:05 /var/log/lastlog

root@linux:~ # du -h /var/log/lastlog
348K /var/log/lastlog
root@linux:~ # du -h --apparent-size /var/log/lastlog
2.3G /var/log/lastlog

root@linux:~ # ls -lh /var/log/tallylog
-rw——-. 1 root root 515M May 2 10:42 /var/log/tallylog

root@linux:~ # du -sh /var/log/tallylog
288K /var/log/tallylog
root@linux:~ # du -sh --apparent-size /var/log/tallylog
515M /var/log/tallylog
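The difference is easy to reproduce: a file with a hole in it reports its full apparent size to ls, while du counts only the allocated blocks. A minimal sketch using a scratch file (not the real logs):

```shell
#!/bin/sh
# Create a sparse file: seek 100 MB into it and write a single byte.
f=$(mktemp)
dd if=/dev/zero of="$f" bs=1 count=1 seek=104857600 2>/dev/null
ls -lh "$f"                    # apparent size: ~100M
du -h "$f"                     # allocated blocks: a few KB at most
du -h --apparent-size "$f"     # ~100M again
rm -f "$f"
```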

Source: faillog command create a huge file, like 128GB file (/var/log/faillog)

Why is the /var/log/lastlog file so large?

Linux – command ls or df hangs on /

If you run ls or df -h, the commands appear to hang

Check if you have NFS shares mounted

root@linux:~ # mount | grep nfs
sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)
nfsd on /proc/fs/nfsd type nfsd (rw)
mdmMPC:/export/sapmnt/MPC on /sapmnt/MPC type nfs (rw,nfsvers=3,proto=tcp,noac,soft,sloppy,addr=10.106.10.118)
linuxnfs25:/oracle/HP0/sapdata1/DUMP on /dump type nfs (rw,addr=142.40.81.32)
nfshp0:/export/sapmnt/HP0/exe on /sapmnt/HP0/exe type nfs (rw,nfsvers=3,proto=udp,noac,soft,sloppy,addr=10.106.10.28)
nfshp0:/export/sapmnt/HP0/profile on /sapmnt/HP0/profile type nfs (rw,nfsvers=3,proto=udp,noac,soft,sloppy,addr=10.106.10.28)

The server behind the /dump share was turned off, so I tried to unmount the share

root@linux:~ # umount /dump
umount.nfs: /dump: device is busy
umount.nfs: /dump: device is busy

Even forcing the unmount didn't work

root@linux:~ # umount -f /dump
umount2: Device or resource busy
umount.nfs: /dump: device is busy
umount2: Device or resource busy
umount.nfs: /dump: device is busy

Running umount with -l does a lazy unmount: the filesystem is detached immediately and the remaining references are cleaned up once it is no longer busy

root@linux:~ # umount -l /dump
root@linux:~ # df -h /dump
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/vgroot-lv_root
7.8G 818M 6.6G 11% /

Exporting directory using NFS on Red Hat Enterprise Linux 6.7 and mounting on other server

I will export two filesystems, /export/mnt/HD0/global and /export/mnt/HD0/profile, from linux01 and mount them on linux02.

Both servers are running Red Hat Enterprise Linux 6.7

Checking filesystem that will be exported

root@linux01:~ # df -h /mnt/HD0/global
Filesystem Size Used Avail Use% Mounted on
linux01:/export/mnt/HD0/global
3.5G 23M 3.3G 1% /mnt/HD0/global

root@linux01:~ # df -h /mnt/HD0/profile
Filesystem Size Used Avail Use% Mounted on
linux01:/export/mnt/HD0/profile
3.5G 3.0M 3.3G 1% /mnt/HD0/profile

Checking which filesystems are being exported from linux01

root@linux02:~ # showmount -e linux01
Export list for linux01:
/export/mnt/HD0/exe linux01.setaoffice.com
/export/mnt/HD0/profile linux01.setaoffice.com
/export/mnt/HD0/global linux01.setaoffice.com
/export/interface/SAP linux01.setaoffice.com
/export/interface/HD0 linux01.setaoffice.com

Editing /etc/exports on linux01 to export the shares to linux02 as well

root@linux01:~ # vi /etc/exports
/export/interface/HD0 linux01(rw,no_root_squash,sync)
/export/interface/SAP linux01(rw,no_root_squash,sync)
/export/mnt/HD0/global linux01(rw,no_root_squash,sync) linux02(rw,no_root_squash,sync)
/export/mnt/HD0/profile linux01(rw,no_root_squash,sync) linux02(rw,no_root_squash,sync)
/export/mnt/HD0/exe linux01(rw,no_root_squash,sync)
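A quick way to double-check the edit (the exports_clients helper is mine, not a standard tool): flatten the file into one "share client" pair per line, so a missing client is easy to spot:

```shell
#!/bin/sh
# exports_clients: print one "share client" pair per line from an
# exports-format file, stripping the (options) suffix from each client.
exports_clients() {
    awk '!/^#/ && NF { for (i = 2; i <= NF; i++) { c = $i; sub(/\(.*/, "", c); print $1, c } }' "$1"
}

# Demo with the two lines that were changed above
cat > /tmp/exports.$$ <<'EOF'
/export/mnt/HD0/global linux01(rw,no_root_squash,sync) linux02(rw,no_root_squash,sync)
/export/mnt/HD0/profile linux01(rw,no_root_squash,sync) linux02(rw,no_root_squash,sync)
EOF
exports_clients /tmp/exports.$$
rm -f /tmp/exports.$$
```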

I stopped and started NFS

root@linux01:~ # /etc/init.d/nfs stop
Shutting down NFS daemon: [FAILED]
Shutting down NFS mountd: [ OK ]
Shutting down NFS quotas: [ OK ]
Shutting down NFS services: [ OK ]
Shutting down RPC idmapd: [ OK ]
root@linux01:~ # /etc/init.d/nfs start
Starting NFS services: [ OK ]
Starting NFS quotas: [ OK ]
Starting NFS mountd: [ OK ]
Starting NFS daemon: [ OK ]
Starting RPC idmapd: [ OK ]

Mounting and checking the NFS share

root@linux02:~ # mount linux01:/export/mnt/HD0/profile /mnt/HD0/profile
root@linux02:~ #

root@linux02:~ # mount linux01:/export/mnt/HD0/global /mnt/HD0/global
root@linux02:~ #

root@linux02:~ # df -h /mnt/HD0/profile
Filesystem Size Used Avail Use% Mounted on
linux01:/export/mnt/HD0/profile
3.5G 3.0M 3.3G 1% /mnt/HD0/profile

root@linux02:~ # df -h /mnt/HD0/global
Filesystem Size Used Avail Use% Mounted on
linux01:/export/mnt/HD0/global
3.5G 23M 3.3G 1% /mnt/HD0/global

Solaris – ls or df error: cannot canonicalize .: Permission denied

On systems where the umask is set to 027, the directory created to be used as a mount point appears to have the correct permissions once a filesystem is mounted on it, but regular users cannot get the disk space utilization or list (ls -la) the permissions of the current directory (.) or the level above (..)

user@solaris:~ $ umask
027

user@solaris:~ $ df -k /backup_export
df: cannot canonicalize .: Permission denied

user@solaris:/ # ls -ld /backup_export
drwxr-xr-x 17 oracle oinstall 1024 Dec 12 15:33 /backup_export

Note that with the root user there is no problem executing the same command that failed for the regular user

root@solaris:~ # df -k /backup_export
Filesystem kbytes used avail capacity Mounted on
/dev/vx/dsk/dadosdg/vol_bkp
92409856 22803105 65256478 26% /backup_export

To solve this, unmount the filesystem so you can see the underlying mount point directory

root@solaris:/ # umount /backup_export

root@solaris:/ # ls -ld /backup_export
drwxr-x— 2 root root 512 Aug 13 2007 /backup_export

Now apply permissions that make the directory traversable by regular users. I set the permissions to 777 and mounted the filesystem again

root@solaris:/ # chmod 777 /backup_export

root@solaris:/ # mount /backup_export

root@solaris:/ # df -k /backup_export
Filesystem kbytes used avail capacity Mounted on
/dev/vx/dsk/dadosdg/vol_bkp
92409856 22803105 65256478 26% /backup_export

The visible permissions are back to what they used to be, and the error message no longer appears

user@solaris:~ $ df -k /backup_export
Filesystem kbytes used avail capacity Mounted on
/dev/vx/dsk/dadosdg/vol_bkp
92409856 22803105 65256478 26% /backup_export

user@solaris:/ # ls -ld /backup_export
drwxr-xr-x 17 oracle oinstall 1024 Dec 12 15:33 /backup_export
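For reference, the 027 umask is what produced the drwxr-x--- mount point in the first place: directories are created with mode 0777 minus the umask bits. A minimal reproduction:

```shell
#!/bin/sh
# With umask 027, mkdir yields 0777 & ~0027 = 0750 (drwxr-x---):
# "other" has no execute bit, so regular users cannot traverse it.
d=$(mktemp -d)
(
  umask 027
  mkdir "$d/mountpoint"
  ls -ld "$d/mountpoint"
)
rm -rf "$d"
```

Creating mount points with an explicit mode (e.g. mkdir -m 755) avoids the problem regardless of the umask in effect.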