[ale] interesting problem

Putnam, James M. putnamjm at sa.edu
Thu Jan 11 15:23:09 EST 2018


   Tar (with some combination of switches) may be able to do all this for you. A
   quick test would tell.
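
   For example, something along these lines might do a symlink-only copy
   (an untested sketch; /src/tree and /dest/tree are placeholder paths).
   GNU tar stores symlinks as symlinks rather than following them, so the
   links should come out verbatim on the other side, relative targets and
   all:

      # From the source root, feed only the symlinks to tar, then unpack
      # them under the destination root. -print0/--null cope with odd names.
      cd /src/tree
      find . -type l -print0 | tar --null -T - -cf - | tar -C /dest/tree -xpf -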

   Upping the block size to some multiple of the native file system block size may
   let the OS DMA directly to/from user space (at least it did in SunOS/Solaris/BSD*;
   not sure if Linux does that these days), which would kill some of the tar overhead.
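
   With GNU tar the knob for that is the blocking factor, -b, counted in
   512-byte records (the default is 20, i.e. 10 KiB). A hypothetical
   invocation with 128 KiB records, paths again being placeholders:

      # Write the archive with 256 * 512 = 128 KiB records.
      tar -b 256 -cf /dest/backup.tar -C /src/tree .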

--
James  M. Putnam
Visiting Professor of Computer Science

The air was soft, the stars so fine,
the promise of every cobbled alley so great,
that I thought I was in a dream.
________________________________________
From: Ale [ale-bounces at ale.org] on behalf of Jim Kinney via Ale [ale at ale.org]
Sent: Thursday, January 11, 2018 3:04 PM
To: Atlanta User Group (E-mail)
Subject: [ale] interesting problem

Imagine a giant collection of files, several TB, with unknown directory names and unknown directory depths at any point. From the top of that tree, you need to cd into EVERY directory, find the symlinks in each one, and remake them in a parallel tree on the same system but under a different starting point. Rsync is not happy with the relative links, so that fails: each link appears to be resolved relative to the location of the process running rsync.

It is possible, given the source of this data tree, that recursive, looping symlinks exist. Those must be recreated in the new location.

It looks like the best approach is a find to list all symlinks in the entire tree, then a cd to each final location to recreate the link. That can be sped up by splitting the link list into sections and running multiple processes, as in the sketch below.
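
A minimal sketch of that approach (untested; /src/tree and /dest/tree are placeholder paths). Plain readlink prints the link text without resolving it, so relative and even looping links get recreated verbatim:

    #!/bin/sh
    # Recreate every symlink found under /src/tree in the parallel tree
    # rooted at /dest/tree, preserving the raw link text.
    cd /src/tree || exit 1
    find . -type l | while IFS= read -r link; do
        target=$(readlink "$link")              # link text, not resolved
        mkdir -p "/dest/tree/$(dirname "$link")"
        ln -s "$target" "/dest/tree/$link"
    done

To parallelize, the find output could be split into chunks with split(1) and each chunk fed to its own copy of the loop. (Filenames containing newlines would break the read loop; -print0 plus a NUL-aware reader would be needed for those.)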

Better ideas?

--

James P. Kinney III

Every time you stop a school, you will have to build a jail. What you gain at one end you lose at the other. It's like feeding a dog on his own tail. It won't fatten the dog.
- Mark Twain, speech, 11/23/1900

http://heretothereideas.blogspot.com/


