[ale] Tool of the Month!
Michael B. Trausch
mbt at naunetcorp.com
Fri Oct 11 17:47:44 EDT 2013
On 10/10/2013 02:11 PM, Brandon Wood wrote:
> Any specific examples you care to share with the group?
I had to back up 150 GB of stuff to BD-R. Without something "in between"
tar and the burner program, the burner exits when the disc fills, the
pipe closes, and tar dies with SIGPIPE; you can't easily resume the
backup on the next media.
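For context, the naive pipeline looks something like this (the device
and path are illustrative; growisofs takes an image path rather than
reading a bare pipe, hence the usual /dev/fd/0 trick):

    # tar streams straight into the burner; when the disc fills,
    # growisofs exits, the pipe closes, and tar takes a SIGPIPE
    # partway through the archive.
    tar -cf - /srv/data | growisofs -Z /dev/sr0=/dev/fd/0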
With splitpipe, you can say "run this program repeatedly and send (up
to) this much data to it each time, until there is no more input data".
What this means is (an invocation sketch follows the list):
* You have something like tar running to back up a very large set of
data, which is piped into splitpipe.
* Splitpipe will then run the command you specify and shunt its own
standard input (the data from tar) to the standard input of whatever
command you give it. (In my case, I used growisofs to send raw data
to a BD-R.)
* When the first media is full, splitpipe will stop accepting new data
from tar (but hold the pipe open!) and let you change media.
Because tar is blocked on writing, it will not die while waiting for
the stream to resume.
* When you tell splitpipe that the next media is ready, it will spawn
the specified command again, and resume shunting its stdin to the
command's stdin, until finished.
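Putting it together, the invocation looked something like the sketch
below. I'm writing this from memory, so treat the size flag and the
quoted-command form as illustrative rather than exact; check
splitpipe's man page. (A single-layer BD-R holds about 25 GB; a 23G
limit leaves some headroom.)

    # splitpipe runs the quoted burner command once per disc and
    # pauses for a media change when the per-disc limit is reached.
    tar -cf - /srv/data \
        | splitpipe -b 23G 'growisofs -Z /dev/sr0=/dev/fd/0'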
The concept could use some improvement, and I think I might actually go
ahead and do it because I've thought about it several times before:
1. A program that could take an arbitrary input and split it amongst
CD/DVD/BD media, and /only do that/, would be very useful. Then you
could do things like have end-of-media detection based on the actual
media size. (This would mean that a program would need to actually
handle the burning to CD/DVD/BD directly or with a specially-written
helper utility.)
2. Building the previous program would mean writing a library, and
that same library could then be used to write a splitpipe clone
that works as the current splitpipe does. The only real addition
would be the special-purpose single-input-to-multiple-discs utility.
3. The program could automatically handle parallel compression using as
many cores as are available (either on the local host or across an
entire network). This is necessary to burn a vast amount of data
compressed with, e.g., xz in real time. Even gzip can't keep up with
4x BD-R rates on an 8-core system using the pigz utility. (See the
pipeline sketch after this list.)
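For a sense of the numbers: 1x BD is 36 Mbit/s, i.e. 4.5 MB/s, so a 4x
burn needs a sustained ~18 MB/s of *compressed* output, end to end.
With pigz the attempt looks roughly like this (device, path, and the
splitpipe flag are illustrative, as above; pigz's -p flag sets the
number of compression threads):

    # Compress in parallel on 8 cores before splitting across discs;
    # the compressor must emit >= 18 MB/s or the burner starves.
    tar -cf - /srv/data \
        | pigz -p 8 \
        | splitpipe -b 23G 'growisofs -Z /dev/sr0=/dev/fd/0'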
--- Mike
--
Michael B. Trausch
President, Naunet Corporation
Phone: (678) 287-0693 x130 or (855) NAUNET-1 x130
FAX: (678) 783-7843