[ale] best solution for removing large # of files on NFS share.
DJPfulio at jdpfu.com
Wed Jun 29 19:38:15 EDT 2022
I'd use 'find', if you want opinions. find won't run into the "argument list too long" failures you get with too many files in a single directory, the way a shell glob handed to rm does.
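For example (made-up path; the failure comes from the shell expanding the glob before rm ever runs, not from rm itself):

rm /nfs/share/logdir/*     # the shell expands the glob to every name;
                           # with enough files: "Argument list too long"

find /nfs/share/logdir -type f -delete   # find walks the tree itself,
                                         # no glob, so millions of files are fine

On a find without the GNU -delete action, piping through xargs does the same job in batches:

find /nfs/share/logdir -type f -print0 | xargs -0 rm -f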
As for saving files to be deleted, that's why we have backups, right? Let someone scream when the files are gone, have their heart beat faster and freak out a little. It's good for them. ;)
If you are the admin and don't have backups, you've failed at your job, unless there is a piece of paper signed by the CIO stating the risks and accepting them. Keep a copy of that paper in your "save my ass" drawer.
On 6/29/22 16:14, Bob Toxen via Ale wrote:
> Yeah, I'll do a
>
> mv -i foo foo.del # short for delete
>
> on anything that might be valuable.
>
> Bob
>
> On Wed, Jun 29, 2022 at 01:08:52PM -0400, Jim Kinney wrote:
>> Having been bitten by rm before, I now do a mv oldname newname on the dir to be removed and let it sit for a day or more to see who screams. The same process works inside a dir: create a new dir called delete-me and mv the files into it. Do a cleanout later, after the user screams subside.
>>
>> I only care about my efficiency. The computer exists to do my bidding. It's nice to not eat up resources on SA stuff, but if it takes "too long" to implement, my time is more valuable.
>>
>> On June 29, 2022 12:27:43 PM EDT, Bob Toxen via Ale <ale at ale.org> wrote:
>>> Efficiency is very close, since neither involves a fork/exec sequence
>>> per file (which is very expensive). Do whichever you are most
>>> comfortable with.
>>>
>>> Since I am very paranoid about "rm -rf" in case of mistyping, I might
>>> use find with a -name pattern, thusly:
>>>
>>> find dir1 -type f -name '*.stupid_log_file_ext' -delete
>>>
>>> I might add the following test before the -delete and put the whole
>>> command in root's crontab:
>>>
>>> ! -mtime +365
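
Putting Bob's pieces together, the combined command would look something like this (untested sketch; dir1 and the extension are his placeholders, and any -mtime test has to come before -delete for find to apply it):

find dir1 -type f -name '*.stupid_log_file_ext' -mtime +365 -delete

Watch the direction: -mtime +365 matches files last modified more than a year ago, while '! -mtime +365' matches the opposite (files touched within the last year), so use whichever you actually mean. As a root crontab entry running nightly at 3am, that might be:

0 3 * * * find dir1 -type f -name '*.stupid_log_file_ext' -mtime +365 -delete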