<html><head><style type='text/css'>p { margin: 0; }</style></head><body><div style='font-family: arial,helvetica,sans-serif; font-size: 12pt; color: #000000'>I'm reaching way back to stuff I learned in the '80s, but it sounds like the actual file is gone while the directory entry is still there. As I recall, directory entries were file names that pointed to inodes, and the inode had the pointers to the blocks of the file, the permissions, etc. An inode could have multiple directory entries; these were "hard links," usually created with ln without the "-s" option. The file was only deleted when the last hard-linked directory entry was removed. The number right after the permissions in "ls -l" was the number of hard links. Inodes that still existed but had no directory entries are what ended up in lost+found. This could happen if you deleted an open file and then powered off without closing it.<div><br></div><div>It looks like these are directory entries that somehow remained after the file was deleted. You might try the "unlink" command. I'm not sure how this would have happened, though. Is it reproducible?<br><br>Scott<br><hr id="zwchr"><div style="color:#000;font-weight:normal;font-style:normal;text-decoration:none;font-family:Helvetica,Arial,sans-serif;font-size:12pt;"><b>From: </b>"James Sumners" &lt;james.sumners@gmail.com&gt;<br><b>To: </b>"Atlanta Linux Enthusiasts - Yes! We run Linux!" &lt;ale@ale.org&gt;<br><b>Sent: </b>Tuesday, December 11, 2012 1:00:25 PM<br><b>Subject: </b>[ale] How can I delete links that can't be seen by stat?<br><br>Check out https://www.dropbox.com/s/moq4wmeas42blu9/broken_links.png<br><br>In the screenshot, you'll see a list of links that have no properties<br>whatsoever according to `ls`. These are supposed to be hard links.<br><br>Here's the scenario:<br><br>I have an NFS mount where I send nightly backups. These nightly<br>backups use a common "full backup" and a series of differential<br>backups. I'm using rsync to do this.
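[The `cp -al` hard-link technique this kind of differential scheme relies on can be sketched in a few coreutils commands. The paths below are hypothetical, and `stat -c '%h'` (print the hard-link count) assumes GNU coreutils:]

```shell
#!/bin/bash
# Sketch of the hard-link differential idea: cp -al copies the directory
# tree but hard-links the regular files, so unchanged files cost no
# extra disk space. Paths here are throwaway examples.
set -e
work=$(mktemp -d)

mkdir "${work}/monday"
echo "payload" > "${work}/monday/data.txt"

# Clone Monday's snapshot as the starting point for Tuesday.
# Directories are created anew; regular files become hard links.
cp -al "${work}/monday" "${work}/tuesday"

# Both names now point at the same inode, so the link count is 2.
stat -c '%h' "${work}/tuesday/data.txt"   # prints 2

# Removing one directory entry does not delete the data; the other
# name still reaches the inode, and the link count drops to 1.
rm "${work}/monday/data.txt"
stat -c '%h' "${work}/tuesday/data.txt"   # prints 1

rm -rf "${work}"
```

[A subsequent rsync pass against the new snapshot then only has to rewrite the files that actually changed; note that by default rsync replaces a changed file with a new inode, which unshares it from the older snapshot.]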
At some point, the nightly<br>backups failed due to low disk space and got out of sync. So I'm<br>removing old backups and starting anew. However, after deleting the<br>first few "old" backups I encountered this problem where `rm` can't<br>remove these files since it can't lstat() them.<br><br>Anyone know how I can delete these links?<br><br>For reference, my backup script is:<br><br>##################################################<br><br>#!/bin/bash<br><br># Pre-execution check for bsfl<br># Set options afterward<br>if [ ! -f /etc/bsfl ]; then<br> echo "Backup script requires bsfl (https://code.google.com/p/bsfl/)."<br> exit 1<br>fi<br>source /etc/bsfl<br><br>### Options ###<br><br># Set to the desired logfile path and name<br>LOG_FILE="$(dirname $0)/logs/runbackup-$(date +'%m-%d-%Y').log"<br><br># Set to the file that contains backup exclusions<br># (format = line-separated paths)<br>EXCLUDES="$(dirname $0)/excludes"<br><br># Set to the NFS mount point<br># Be sure to configure /etc/fstab appropriately<br>NFS_DIR="$(dirname $0)/clyde"<br><br># Set to test string for testing NFS mount success<br>NFS_MOUNT_TEST="^clyde"<br><br># Set to the remote backup container directory<br># Backups will be stored in subdirectories of this directory<br>BACKUP_DIR="${NFS_DIR}"<br><br># Set to the email address that will receive notifications<br># of backup failures<br>ERROR_EMAIL_ADDR="your_email_address@mail.clayton.edu"<br><br><br>### Begin actual script ###<br><br>function notify {<br> mail -s "Backup failure on $(hostname)" ${ERROR_EMAIL_ADDR} < ${LOG_FILE}<br>}<br><br># Turn on bsfl logging support<br>LOG_ENABLED="yes"<br><br># We need to be root to 1) read all files and 2) mount the NFS<br>USER=$(whoami)<br>if [ "${USER}" != "root" ]; then<br> log_error "Backup must be run as root."<br> notify<br> die 2 "Backup must be run as root."<br>fi<br><br>log "Mounting NFS"<br>mount ${NFS_DIR}<br><br>grep -q "${NFS_MOUNT_TEST}" /proc/mounts<br>if [ ! $? 
-eq 0 ]; then<br> log_error "Could not mount NFS."<br> notify<br> umount ${NFS_DIR}<br> die 3 "Could not mount NFS."<br>fi<br><br># Let's make sure we have enough room on the remote system<br>STAT_INFO=$(stat -f --format='%b %a %S' ${NFS_DIR})<br>TOTAL_BLOCKS=$(echo ${STAT_INFO} | awk '{print $1}')<br>FREE_BLOCKS=$(echo ${STAT_INFO} | awk '{print $2}')<br>BLOCK_SIZE=$(echo ${STAT_INFO} | awk '{print $3}')<br># 1048576 = 1024 * 1024 bytes = 1 megabyte, so (1048576 / block size)<br># is the number of blocks per megabyte; dividing the free block count<br># by that gives the free space in megabytes<br>REMOTE_FREE_MB=$(echo "${FREE_BLOCKS} / (1048576 / ${BLOCK_SIZE})" | bc -l)<br>log "Remote free megabytes = ${REMOTE_FREE_MB}"<br><br>STAT_INFO=$(stat -f --format='%b %a %S' /)<br>TOTAL_BLOCKS=$(echo ${STAT_INFO} | awk '{print $1}')<br>FREE_BLOCKS=$(echo ${STAT_INFO} | awk '{print $2}')<br>BLOCK_SIZE=$(echo ${STAT_INFO} | awk '{print $3}')<br>LOCAL_USED_MB=$(echo "(${TOTAL_BLOCKS} - ${FREE_BLOCKS}) / (1048576 / ${BLOCK_SIZE})" | bc -l)<br>log "Local used megabytes = ${LOCAL_USED_MB}"<br><br>REMOTE_HAS_ROOM=$(echo "${REMOTE_FREE_MB} > ${LOCAL_USED_MB}" | bc -l)<br>if [ ${REMOTE_HAS_ROOM} -eq 0 ]; then<br> log_error "Remote system does not have enough free space for the backup."<br> notify<br> umount ${NFS_DIR}<br> die 4 "Remote system does not have enough free space for the backup."<br>else<br> log "Remote system has enough room. Proceeding with backup."<br> log "===== ===== ===== ====="<br> log ""<br>fi<br><br>if [ ! 
-d ${BACKUP_DIR} ]; then<br> mkdir ${BACKUP_DIR}<br>fi<br><br>DIR_READY=0<br><br>today=$(date +'%m.%d.%Y')<br>sixthday=$(date -d'-6 days' +'%m.%d.%Y')<br>if [ -d "${BACKUP_DIR}/${sixthday}" ]; then<br> # Move the sixth day to today<br> log "Moving the oldest backup to be today's backup."<br> mv "${BACKUP_DIR}/${sixthday}" "${BACKUP_DIR}/${today}" >>${LOG_FILE} 2>&1<br> ln -sf "${BACKUP_DIR}/${today}" "${BACKUP_DIR}/complete_backup" >>${LOG_FILE} 2>&1<br> log ""<br> DIR_READY=1<br>fi<br><br>if [ -d "${BACKUP_DIR}/${today}" ]; then<br> DIR_READY=1<br> log "Today's backup directory already exists. Will update today's backup."<br> log ""<br>fi<br><br>if [ ${DIR_READY} -eq 0 ]; then<br> yesterday=$(date -d'-1 days' +'%m.%d.%Y')<br> if [ -d "${BACKUP_DIR}/${yesterday}" ]; then<br> log "Copying yesterday's backup (${yesterday}) into place for differential backup."<br> cp -al "${BACKUP_DIR}/${yesterday}" "${BACKUP_DIR}/${today}" >>${LOG_FILE} 2>&1<br> log ""<br> else<br> # Fall back to the most recently modified backup directory<br> last_backup_dir=$(ls -1t ${BACKUP_DIR} | head -n 1)<br> log "Copying most recent backup (${last_backup_dir}) into place for differential backup."<br> cp -al "${BACKUP_DIR}/${last_backup_dir}" "${BACKUP_DIR}/${today}" >>${LOG_FILE} 2>&1<br> log ""<br> fi<br><br> DIR_READY=1<br>fi<br><br>if [ ${DIR_READY} -eq 1 ]; then<br> rsync --archive --one-file-system --hard-links --human-readable --inplace \<br> --numeric-ids --delete --delete-excluded --exclude-from=${EXCLUDES} \<br> --verbose --itemize-changes / "${BACKUP_DIR}/${today}" >>${LOG_FILE} 2>&1<br>else<br> log_error "Couldn't determine destination backup directory?"<br> notify<br>fi<br><br>log ""<br>log "===== ===== ===== ====="<br>log "Backup complete."<br><br>umount ${NFS_DIR}<br><br>##################################################<br><br>-- <br>James Sumners<br>http://james.roomfullofmirrors.com/<br><br>"All governments suffer a recurring problem: Power attracts<br>pathological personalities. 
It is not that power corrupts but that it<br>is magnetic to the corruptible. Such people have a tendency to become<br>drunk on violence, a condition to which they are quickly addicted."<br><br>Missionaria Protectiva, Text QIV (decto)<br>CH:D 59<br>_______________________________________________<br>Ale mailing list<br>Ale@ale.org<br>http://mail.ale.org/mailman/listinfo/ale<br>See JOBS, ANNOUNCE and SCHOOLS lists at<br>http://mail.ale.org/mailman/listinfo<br></div><br></div></div></body></html>